Welcome to the RSEC Blog!
Here you will find opinion pieces on some of the latest and most important regulatory issues facing the country. The point of these blogs is to raise awareness and help facilitate a bigger conversation that hopefully leads to actions that ensure the regulatory issues are resolved in a way that leads to the betterment of society. Please comment and provide feedback!
By Michael Swetnam
Yesterday’s Supreme Court ruling gave due process, as guaranteed by the Constitution, back to the citizens of the USA.
On February 9th, 2016, the Supreme Court ruled that the EPA's attempt to enforce the regulations that implement the Clean Power Plan was unconstitutional. The high court ruled that forcing US citizens and companies to comply with government regulations before those regulations are tested in court is illegal. This is a precedent-setting change that will have significant impacts on the Federal regulatory system.
One of the fundamental tenets of US policy is due process of law. The regulatory flaw fixed by the Supreme Court is that federal agencies were requiring citizens to follow rules and regulations, sometimes costing billions of dollars, before they had their day in court; before due process. This ruling restores a basic part of our American legal system by allowing citizens to argue the legality of a regulation before they are forced to comply with it.
Tuesday’s Supreme Court ruling gave back to the citizens of the US the due process that is guaranteed in our Constitution. No one in America can be forced to give up their rights, liberty, or property without due process. The Supreme Court struck the bell of freedom yesterday and reminded the Federal government that its power to regulate the population is subservient to that population and to the rule of law through due process. This is a key principle of the American legal system.
There are many flaws and problems with our government, but it is a comforting thought that we can still rely on our courts to balance the flaws of the legislature and the executive branch when needed.
By Charles Mueller
What happened in Flint, Michigan over the last 17 months was criminal, and people should be going to jail. There were so many failures by so many different actors at so many different levels of leadership that it makes your head spin when you finally realize what happened. The governor’s move to switch the town of Flint’s water supply without ensuring appropriate precautions were in place was criminal. The failure of the leadership at the Michigan Department of Environmental Quality to ensure that standard corrosion prevention and control measures were implemented during the switch was criminal. The failure of the EPA Region 5 Administrator to see this coming, and to act swiftly once the data was literally dropped on her doorstep, was criminal. The situation is a stark reminder that it doesn’t matter how good our laws are or how perfectly our regulations are drafted; they will never be able to protect the public from the harm caused when bad leaders make bad choices.
How did this happen? How could such terrible failures occur? Were the regulations unclear, or was the system itself just broken? There are 49 other states and 9 other Regional EPA Administrators who prevent such failures every day while following the same laws and rules the leaders charged with protecting the citizens of Flint, Michigan swore an oath to uphold. What does that say? The damage this failure has caused is immeasurable. The actions of these people will lead to the deaths of unborn children because mothers chose to drink tap water instead of bottled water while pregnant. It will lead to irreversible brain damage in children who were just trying to quench their thirst after playing hide-n-seek outside with friends. Who knows; maybe the next great American thinker, innovator, or leader was just lost as a result of these bad choices by bad leaders. Justice must be served, and measures must be taken to restore the public’s confidence that such a travesty will never happen again.
Our Constitution says those elected and appointed to positions of leadership are supposed to protect the public by doing three things: 1) create policies that establish the right boundaries for society; 2) enforce these boundaries in a way that is rational and fair; and 3) make sure the people have due process when accused of crossing these boundaries. Our laws and regulations describe how the leaders we elect and appoint are supposed to create this safe and prosperous environment. While we should always strive to use science, technology, and the lessons of the past to craft the best policies, we will never be able to write words that prevent bad leaders from making bad choices. This is the ultimate lesson to be learned from what has taken place in Flint, Michigan.
The power to govern over the American people is an awesome power gifted to those leaders we elect and appoint to our federal, state and local governments. When our leaders fail to uphold their sworn duties they must be held accountable to ensure the American people can trust their government. Heads should be rolling in Flint, Michigan and people should be in handcuffs.
By Sabrina Katz
On June 29, 2015, in a 5-4 ruling, the Supreme Court struck down the EPA’s regulation of mercury emissions from coal and oil power plants. The decision was based on the EPA’s failure to consider the potential costs of a rule in making its initial decision to regulate the contaminant. The EPA decided to regulate mercury after the results of its 1998 health study showed that mercury emissions at the current levels posed a substantial risk to public health. The EPA also believed that a future regulation of mercury would be technologically feasible and that the best, most cost-efficient way of regulating the contaminant would be determined during the rulemaking process.
The decision essentially came down to whether the EPA correctly interpreted 42 USC 7412(n)(1)(A), which instructs the EPA to make the decision whether or not a regulation is “appropriate and necessary” based on a study of the hazards to public health that contaminants like mercury pose. The text in question reads: "The Administrator shall regulate electric utility steam generating units under this section, if the Administrator finds such regulation is appropriate and necessary after considering the results of the study required by this subparagraph." At this stage, the EPA considered the results of the study but not the potential costs to industry in deciding to regulate mercury emissions, believing that the law did not require the consideration of costs at this stage. The EPA did consider costs at a later point in the rulemaking process, but it did not see any explicit or implied statutory reason that costs should be considered in the first stage of the process.
The opinion of the court, written by Justice Scalia, was that cost is an implied factor in determining whether regulation is "appropriate and necessary"; therefore, the EPA’s failure to consider costs before deciding to regulate violated this statute. However, in the dissenting opinion, Justice Kagan argues that cost considerations are not necessary at this stage: "At the initial stage, EPA must decide whether to regulate a source, based solely on the quantity of pollutants it emits and their health and environmental effects." Scalia illustrates his point by likening the EPA’s decision to regulate without considering costs to a person who finds it “’appropriate’ to buy a Ferrari without thinking about cost, because he plans to think about cost later when deciding whether to upgrade the sound system”. Kagan, in turn, rejects the analogy of luxury items like a sports car to the regulation of dangerous pollutants. She instead compares the EPA to “a car owner who decides without first checking prices that it is ‘appropriate and necessary’ to replace her worn-out brake-pads, aware from prior experience that she has ample time to comparison-shop and bring that purchase within her budget”.
This decision has a variety of possible implications for the future of the federal rulemaking process. Michigan may represent a pull away from the precedent set by Chevron U.S.A., Inc. v. Natural Resources Defense Council, Inc. Chevron is often interpreted as giving overwhelming deference to the regulating agency, unless the agency does something that explicitly violates the statute that gives it authority. The precedent set by Chevron has prompted many courts to give deference to agencies in how regulatory decisions are made and how the rulemaking process is performed.
If courts begin to use the precedent established by Michigan, they will be able to exercise much greater authority over federal regulation through their decisions. The Supreme Court struck down the mercury rule because it inferred that the Clean Air Act implies that the consideration of costs is necessary during the first step in the rulemaking process. Thus Michigan established precedent that an agency must address unspoken or implied requirements when determining whether or not to make a rule. This precedent may allow a future judge to strike down a rule based on his or her particular interpretation of the authorizing statute.
Moreover, the Michigan decision was not based on whether but on when the EPA considered costs in the rulemaking process. This potentially means that judges can now examine each step of the rulemaking process individually as opposed to looking at the process as a whole. Allowing this could lead to precedent that a rule can be overturned in the courts for not following a specific set of steps or considerations in a specific order. Since Michigan was based on the EPA’s failure to follow an implied step of the rulemaking process, it suggests that not all mandatory steps of the rulemaking process are enumerated and precedent exists for a judge to rule on how a rulemaking decision should be made.
Michigan also suggests that the courts, not the agencies, have the ability to determine what rules must be made. One could argue that the vague term “appropriate and necessary” in the Clean Air Act was used to give the EPA the freedom to decide for itself which rules were “appropriate and necessary”. However, Michigan may allow courts to determine for themselves what rules fit the term based on any number of specific but unspecified factors. Regulatory agencies by definition are parties given authority to interpret and implement legislation; however, this decision may give the courts the authority to determine how an agency interprets the law and what rules are needed to implement it.
The precedent set by Michigan may give substantial rulemaking authority to the judges reviewing rules. For agencies, this decision may require them to follow a rulemaking process that is at once more demanding and more ambiguous than the steps laid out in administrative law. Agencies must also be careful to consider all relevant factors at every point in the rulemaking process. It is unclear to what extent the Michigan decision will affect rulemaking and regulatory law for the EPA and other agencies in the years to come. However, the precedent set in Michigan seems to have the potential to trump the precedent set by Chevron and transform the role of the judicial branch in the federal rulemaking process.
By Brian Barnett
It is surprising that there are no well-defined metrics for determining whether a federal regulation is working as intended, or even whether it is being properly enforced. Federal agencies create regulations through a convoluted, drawn-out process, but most of these regulations receive no further scrutiny once they are enacted. It appears that “post-regulation analysis” is limited to those rules with a specific mandate for updates on a regular schedule (e.g., the Safe Drinking Water Act 1996 Amendments require the EPA to reevaluate the maximum acceptable levels of contaminants in drinking water every six years). However, even these updates lack any consistent measurement of efficacy.
This raises the question: for all of the work that goes into fighting over a regulatory action or policy, who pays attention to the actual outcomes once it is passed? There is a wealth of information analyzing the back-and-forth process of internal, OMB, legislative, and judicial review that weighs down a regulatory action, with incredibly complex cost-benefit analysis, negotiated rulemaking, and so on. However, both the body of research on rulemaking and the actual activity of the federal regulatory agencies focus on the creation side of the process. The question of who analyzes regulations after they have been passed remains. All of the above regulatory activities occur prior to the final posting of a regulation, and while they surely help resolve known issues with a regulation’s implementation, they are not guarantors of its actual success. Reviewing the regulation and all that it intends to accomplish is useful, but ultimately inconsequential if the regulation does not produce the desired outcome that prompted the creation of the rule to begin with.
With all of the cost-benefit analysis tools at the disposal of regulatory agencies, why aren’t more of them turning their attention to the efficacy and outcomes of their rules? Why don’t the federal regulatory agencies research and keep tabs on this information? Being able to evaluate and learn about the outcomes of promulgated regulations would have a huge impact on the success and efficiency of all future regulations. These analyses would provide regulatory agencies with evidence-based findings to inform their development of future regulations and updates to current rules. Furthermore, performing these sorts of analyses would allow for greater insight into how successfully these federal regulations are being enforced.
Some federal agencies, like the EPA, are attempting to evaluate the effectiveness of their rules retrospectively, but their current processes still fall short of what is needed. Recent analyses from groups like the OECD have conceptualized systematic regulatory performance review, but such analyses are not translating into meaningful regulatory policy change for Federal agencies.
The Regulatory Science and Engineering Center (RSEC) is currently conducting an in-depth study of the rulemaking process across several Federal agencies and developing a framework for understanding the “regulatory science” of how regulations are made. As part of this study, RSEC will begin to develop metrics for evaluating regulations and their efficacy. Based on its current findings, RSEC recommends that federal regulatory agencies draw lessons from existing efforts to evaluate regulatory efficacy and develop a framework for evaluating and enforcing rules and regulations once they have been promulgated. It is only through a better understanding of the efficacy of regulation that we can improve the real-world impact of Federal agencies in pursuing their missions.
By Charles Mueller
The current draft of the 21st Century Cures Act that is being promoted by the House Energy & Commerce Committee is not going to deliver on its promise to keep this country as the leader in medical innovation. It can’t because the draft legislation completely misses the real problem the US faces when it comes to medical innovation. The problem is our approach to developing new medicines, devices and treatments. This draft legislation, while certainly changing the medical regulatory environment, does nothing to fix the system.
The case for this argument is actually made in the first white paper put out by the House Energy & Commerce Committee. The white paper starts by pointing out that of the 10,000 known diseases, there are currently only 500 treatments. It then quotes Dr. Francis Collins as saying it takes “around 14 years and $2 billion or more” to develop a new drug and that “more than 95% of [such] drugs fail during development”. The House Energy & Commerce Committee believes the solution to this problem is to create a regulatory environment that will allow drugs and devices to be essentially streamlined to market (aka the patient). Somehow nobody thought twice about the last thing Dr. Collins said: “95% of the drugs fail during development”. When you spend approximately $30 billion a year on medicine, you are only getting 500 treatments for 10,000 diseases, and 95% of your projects fail, the system is not just bad, it is bad science. Our approach to medical science and to developing medical treatments is not working.
Some might argue that this draft legislation does some good things to improve the current system, and while that may be true, improving a broken system still leaves you with a broken system. The idea that streamlining the development of drugs and devices will rapidly close the gap between the treatments we have and the diseases we know is just plain wrong. Streamlining the same old process just means you get the same results faster. Doing the same thing and expecting different results is the definition of insanity, and if we really want to fix our approach to developing medical breakthroughs, we need to totally change the system. Let’s try DARPA-style processes, directed research at larger scales, or bringing in other disciplines like physics to search for new treatments. Let’s get outside the box and examine models for developing medical science and treatments that we have not tried yet.
The 21st Century Cures Act has great intentions, but as written, it will accelerate the pace of drug development failure. This piece of legislation will flood the market with new treatments like the lung cancer drug Iressa, which was fast-tracked by the FDA because it can shrink tumors, despite the fact that it does nothing to improve patient survival. Streamlining drug and device development is not going to bring 21st century cures. Streamlining existing processes is not the answer.
What we need is a new system, a new approach that will actually do what the 21st Century Cures Act promises: Build a foundation for 21st century medicine.
By Charles Mueller
The FDA should require manufacturers of genetically modified organisms (GMOs) to classify and label their products accordingly. This is the right solution to an ongoing debate that is occupying far too much of the Government’s and the public’s time. There is no reason for Congress to be debating bills that would ban or require the labeling of foods whose DNA has been precisely modified by today’s best biotechnologies. There is a simple, common sense solution that the FDA is completely capable of handling on its own, without new legislation.
First off, let’s clarify what this is really about. There are two camps at war here: the industry camp, which is afraid that labeling its modified food products will hurt sales, and the public, which is scared, probably for good reason (see DDT), of new things industry introduces into foods that they don’t know about or understand. The referees in this war of ignorance are the FDA, EPA, and USDA, and unfortunately they have yet to really take control.
The reality for industry is that if done right, and explained to the public right, labeling food that has been genetically modified has the potential to boost sales, not hurt them. There are many great things that genetic modification can do that are almost certainly safe, like the Arctic Apple, which is engineered not to brown after it has been cut. As a PhD in biochemistry, I would be surprised if science one day showed these were harmful to health, regardless of what metric you used to define “health”. Industry should be using the labeling argument to promote the great things that precision genetic engineering is capable of. Rather than spend millions lobbying Congress to ban the labeling of GMOs, they should be spending those millions developing food products that will foster a better tomorrow.
Just as industry needs to quit pretending labeling will hurt sales, the public needs to quit pretending that all GMOs are going to hurt them. Generalizing all GMOs as safe or unsafe is the problem to begin with. Doing so ignores that the issue is complex and that there is a spectrum of genetic modifications in the foods referred to as GMOs. Some modifications simply enhance desired traits in our foods, leading to better taste and/or nutritional content, much like the breeding practices we’ve used for years. Others change growth properties or introduce pesticidal or antimicrobial properties. The health implications of these various changes are not equivalent, and it is unfortunate the scientific community has yet to properly address this reality. The fact that GMOs have never been properly tested for safety in humans, and that our current methods for evaluating safety are inadequate, only fuels the fear that all GMOs are potentially harmful when in many cases they likely aren’t.
So what is the solution here? How do we reconcile the fear of profit loss with the fear for safety? One solution would be to have the FDA develop a scale that quantifies the nature of the genetic modification in a food. In developing this classification system, the FDA should engage directly with the public and industry to explain and determine the classifications of GMOs, incorporate the best available science into its judgments, and identify gaps in the research so that future studies can improve the system going forward.
Following the development of this classification system, the FDA should create regulation that requires GMO food products to be labeled using it. Doing so will create transparency about GMOs for both industry and the public: industry gets a non-controversial label to embrace as a selling tool, and the public gets the chance to manage their own food risks. The best part of this solution is that it finally becomes possible to monitor human consumption of GMOs and potentially identify any adverse health effects, which in turn could provide new information to improve the classification system.
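To make the classify-then-label idea concrete, here is a minimal sketch in code. Everything in it is hypothetical: the three modification tiers are loosely drawn from the spectrum described above (trait enhancement, growth modification, protective traits), and the class names and label format are invented for illustration; they are not an actual FDA scale.

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical classification tiers -- illustrative only, not an actual FDA scale.
class ModificationClass(Enum):
    TRAIT_ENHANCEMENT = 1    # e.g., better taste or nutrition, akin to selective breeding
    GROWTH_MODIFICATION = 2  # altered growth properties
    PROTECTIVE_TRAIT = 3     # introduced pesticidal or antimicrobial properties

@dataclass
class GMOProduct:
    name: str
    modification: ModificationClass

def label(product: GMOProduct) -> str:
    """Render a consumer-facing label from the product's classification tier."""
    tier_name = product.modification.name.replace("_", " ").title()
    return f"{product.name}: GMO Class {product.modification.value} ({tier_name})"

apple = GMOProduct("Arctic Apple", ModificationClass.TRAIT_ENHANCEMENT)
print(label(apple))  # Arctic Apple: GMO Class 1 (Trait Enhancement)
```

The point of the sketch is that once modifications are binned into tiers, the label falls out mechanically, and the same tiers could anchor consumption monitoring down the road.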
The common sense solution to this problem is a regulatory one: have the FDA require the labeling of GMOs based on a classification system derived from the best available science and stakeholder input. Failure to do this will ensure that we continue to debate this issue and potentially hold back many of the benefits that precision genetic modification of food can bring to society. Label the GMOs already, but only after you classify them.