Student papers - Planned Adaptation

Integrated paper option in STPP -- Version 1.1 (updated 4/2/2021)

Instead of completing two shorter term papers, graduate students have the option of asking to write a single, longer paper on one of two interrelated topics concerning improved public or private policymaking: Evidence-Based Forecasting and Planned Adaptation. These papers are usually about 15 pages in length. A draft version is to be submitted a few weeks before the final product is due.

Topic A. The Improvement of Forecasts Underlying Policymaking (topic added 2020)

Most policy decisions involve attempts to assess the future and to select options that will provide more future benefits than future costs. That means that forecasts will be made -- whether quantitative or qualitative. Deliberative forecasting methods have become much more common in recent times.

In concept, it is easy to gauge the accuracy of past forecasts – you compare forecast and actual consequences, and modify the method accordingly. It is anything but clear that such self-correction often happens. Looking at an actual past case may help us learn:

• How often are past forecasts validated?

• What procedural factors tend to favor accurate forecasting?

• What forms of expertise are engaged in more accurate forecasts?

• What practical factors impede the post hoc analysis of forecasting errors?

At this stage, we do not know of many writings on the relative accuracy of past forecasts. One promising book, however, is Superforecasting: The Art and Science of Prediction (Broadway Books, 2015), by Tetlock and Gardner. The book reports on a series of experimental forecasting tournaments for which prediction accuracy is scored. This leads the authors to distinguish the more successful forecasters and their methods. However, many of the forecasts are for relatively short time spans, and may or may not hold for the longer-term factors involved in arraying the benefits and costs of policy choices. If you locate helpful sources, let us know.
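As a concrete illustration of how prediction accuracy can be scored in tournaments like those Tetlock and Gardner describe, the standard tool is the Brier score: the mean squared difference between a forecast probability and the 0/1 outcome. The sketch below is illustrative only; the function name and sample numbers are invented, not taken from the book.

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between forecast probabilities and 0/1 outcomes.

    0.0 is a perfect record; always hedging at 50% earns 0.25; lower is better.
    """
    if len(forecasts) != len(outcomes):
        raise ValueError("forecasts and outcomes must be the same length")
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# A sharp, well-calibrated forecaster beats a hedging one:
sharp = brier_score([0.9, 0.8, 0.1], [1, 1, 0])   # 0.02
vague = brier_score([0.5, 0.5, 0.5], [1, 1, 0])   # 0.25
```

Because the score rewards both boldness and calibration, it is one simple way a student paper could compare the track records of past policy forecasts, where outcome data exist.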


Topic B. Policymaking as an Adaptive Undertaking

Prof. Oye and Dr. McCray have suggested that most work on policy formulation focuses on one-shot policy decisions: a ruling is needed to address a felt need with existing knowledge. What is left out is assurance that new information can and will be accommodated to give better results over the longer haul. Policymaking, thus, should be a matter of “Planned Adaptation” over time, not of once-and-for-all problem solving. Past STPP students have examined several particular policy programs from this perspective.

In general, airline safety measures and US nutrition policy might be thought of as opposite ends of the spectrum of adaptive processes. The safety record of domestic air carriers shows continuous self-improvement over the decades, but by the 1990s there were still about 4 fatal accidents per year. In the past ten years that rate has been close to zero; there have been no fatal crashes since 2009.

By contrast, the field of nutrition shows little self-improvement. Decades ago, there was a commitment to improving public health by avoiding saturated fats in the American diet. We haven’t seen impressive progress, and in fact, some saturated fats were replaced by trans fats. Many years later, trans fats were discovered to be more dangerous than the saturated fats they replaced. Policy learning in this area is slow indeed.

We are working to uncover actual cases – teachable cases – of self-improving policy programs, and to understand the disincentives that impede policy improvement over time in other areas.

Forms of Planned Adaptation in Actual Practice

A. Routine Policy Reviews with Assorted Trigger Mechanisms

[1] Strenuous "Never Again" Failure Analyses (National Transportation Safety Board). For most serious air crashes, NTSB conducts a thorough study of the root causes of each aviation failure, and then offers ways to ensure that such failures won't be repeated. A simple idea, no? Compliance with these recommendations is not required, but NTSB's trackers report that most NTSB probes' recommendations are implemented anyway. This arrangement has helped improve air travel safety over past loss rates. For example, US carriers have experienced no fatal crashes for the past decade.

[2] Regularly scheduled de novo post-decision re-examinations of significant policies (EPA's criteria air pollutants, especially airborne particulates; clinical practice guidelines for cardiac surgery; wildlife management).

• Under the Clean Air Act, EPA periodically reviews recently-gained knowledge on health effects and control technologies for a handful of prominent air pollutants.

• Similarly, the recent campaign for evidence-based health care has led to periodic re-analyses (“systematic reviews”) of best practices for hundreds of medical topics; most are conducted by established non-governmental groups of medical experts. In the case of coronary bypass surgery, teams of experts from the American College of Cardiology and the American Heart Assn are in the fifth (roughly 5-year) cycle of systematic reviews of (now) over 150 key steps in heart bypass operations, from the choice of anesthetics used during a bypass to post-operation follow-up.

• Under its Adaptive Harvest Management program, the Department of the Interior routinely re-adjusts its hunting policies based on the latest population data for certain waterfowl.
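The feedback logic behind such scheduled re-examinations can be made concrete with a toy sketch. Everything below is hypothetical (the function name, thresholds, and proportional rule are invented for illustration); the real Adaptive Harvest Management program relies on formal population models, not this simple heuristic.

```python
def next_season_quota(population, target, base_quota):
    """Re-set a harvest quota each season from the latest population estimate.

    A population far below the management target closes the season;
    otherwise the quota scales in proportion to the population.
    """
    if population < 0.8 * target:      # well below target: close the season
        return 0
    ratio = population / target        # otherwise scale the baseline quota
    return round(base_quota * ratio)

# A population 10% above target yields a 10% larger quota:
quota = next_season_quota(population=1_100_000, target=1_000_000, base_quota=50_000)
```

The teachable point is structural, not numerical: the rule is re-run on schedule against fresh data, so the policy self-corrects without requiring a new one-shot decision each year.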

[3] Event-triggered re-examinations (Federal Reserve Board). When current economic conditions warrant it, the Fed considers whether to raise or lower its benchmark federal funds rate.

[4] After Action Reviews (U.S. Army). The Army now routinely convenes participants in combat action to discover what can be learned to improve performance over time. The term 'after action review' seems to have spread to Congress recently in connection with the drafting of coronavirus relief legislation.

[5] Regulatory Sunsets and "Look-backs" (caveat: these are surprisingly rare). One way to stimulate re-examination of the forecast effects of new regulations is to put a sunset date into the rule. Nobody ever seems to do that. Alternatively, a policymaker could require a scheduled look-back at the delivered costs and benefits of past rules; but that doesn't happen much, either.

B. Other Teachable Practices, Perhaps

[6] Dedicated research programs that target residual uncertainties in calculated risks and benefits (EPA). EPA generously funded new research on the health effects of airborne particulate matter, particularly on very small particles. Emission standards were subsequently tightened considerably once the science became clearer. This adaptive approach was mandated by Congress under the Clean Air Act's "NAAQS" program.

[7] "Versioning" (Internet Task Force). Nowadays it is common that standards relating to data processing are given version numbers from the outset. That practice sends a message that later versions are be expected -- to fix bugs, add features, etc. This sector seems to operate on the premise that tech innovations will continue, and that lock-in needs to be avoided.

[8] Third-party expertise (IIHS, APA). The leading source of automobile safety ratings is the Insurance Institute for Highway Safety's (IIHS) crash-testing program. IIHS has full access to risk expertise in insurance companies, and has designed new types of crash tests. Its ratings are far more visible than those of federal regulators (at NHTSA) and of vehicle manufacturers themselves. The computer gaming industry asked the American Psychological Assn to rate the violence level of games played by young gamers . . . with mixed results.

[9] Policy experiments and "Brandeisism" (Road Safety? State-level marijuana policy?). In 1932, Supreme Court Justice Louis Brandeis pointed to state-level decisions as a type of policy laboratory for federal policymaking. Different states may try different policies, and thus provide practical testbeds for use in setting national policy. Few now think that way in Washington, but I sense that some of this policy learning-by-doing may be important in the work of national associations of state officials, in the NASEM's Transportation Research Board, etc. One clear contemporary example is the regulation of legalized marijuana, where Colorado’s experience informs other states as they follow along.


Some General Hypotheses that May Be Worth Considering

[1] Adaptive Outcomes Are Improved When Rulemaking and Factfinding are Organizationally Separated.

The FAA decides on the airworthiness of aircraft, but the independent NTSB determines the cause of accidents and recommends remedies. The US Army’s After Action Reviews are not controlled within the command that directed the action. DOT’s car-crashing program seems less influential than the insurance industry’s crash-rating program at IIHS. (However, EPA handles the regular re-evaluations of NAAQS air pollutants – with internal functional separation of risk analysis and compliance factors.)


[2] Professional societies are not regularly involved in policy re-evaluations . . . except in health policy.

The shift toward evidence-based medicine – plus the federal ban on government agencies' writing health guidelines – has led to numerous specialist groups (e.g., the American College of Cardiology) operating regularly updated Clinical Practice Guidelines for their specialties. We have not found that other professional societies successfully evaluate current rules.


[3] Those organizations that set public and private policies are reluctant to reconsider current rules because change makes enforcement actions less forceful.

If a rule is changed, those subject to it will feel that it could change again. If a current rule is put under review, those subject to it may feel that it may be weakened or unenforced in the short term. Such effects, however, do not appear to reduce the Fed’s credibility when it changes interest rates.



A Science Policy Practitioner’s Mundane Thoughts about Knowledge and Power.

L. McCray

Disclaimer: I selected MIT for graduate training because its Political Science program offered a science policy option. A richly undeserved national dissertation prize on the politics of regulation did not distract me from a workaday career as a Washington bureaucrat at EPA (I led its regulatory reform program), the outermost White House (I designed a Presidentially approved government-wide initiative that was not bold enough for the next President), and the (very independent) National Academy of Sciences, where I was tasked to form a new Policy Division to oversee consensus studies. I may be self-delusional, but I never felt that social science themes helped me as I plodded along my various bureaucratic halls.

1. About Science Policy Studies. While science policy remains my favorite subject, I have a feeling that it spends a bit too much time considering the best way to make particular policy decisions. This may occur because those decisions mix scientific judgements and value judgements, and different participants feel that their judgements are being ignored: this makes for a messy process for observers to sort out. My own preference would be that more attention be given to longer-term matters. I think that today’s formal processes for writing policy (for public policy, I mean the Administrative Procedure Act—APA) are pretty good, both in using currently-available knowledge and in treating contending groups fairly.

2. Making Knowledge/Power Interplay a Dynamic Process. While existing processes do a reasonable job in dealing with current knowledge, they show much less respect for future knowledge and evolving future conditions. The APA’s approach, like the judicial system, is a wonderful method for making a reasonable and fair decision in spite of remaining uncertainty about the decision’s actual plusses and minuses. It is not as wonderful at making sure that, if the outcome is wrongly guessed, a fix will be made. It does not ensure post-decision learning. It is extremely rare that the anticipated costs and benefits of a rule are monitored after the rule is promulgated.

3. Efforts to Make Policy Self-Correcting. The last two American Presidents have addressed this defect, without notable effect. Under President Obama there was a move to re-visit significant rules to see if tune-ups were needed. Under President Trump it was required that, to enact a new rule, two old rules be eliminated. Neither succeeded in installing a self-correcting capacity in the process for significant actions.

4. Targeting Residual Uncertainties. In many cases, it is impossible to forecast the costs and benefits of a new rule; there are too many unknowns (and, yes, too many unknown unknowns). The natural thing to do in such cases is to consult specialists. Sometimes, specialists feel that because they know more relevant facts, they can predict the future well. That is a leap. It would be helpful if there were some consideration of what we wish we knew, so we could ask what discovery, if funded, might be able to boost the actual benefits, or to curtail the actual costs, of the action. Learning by doing should be added to the process.

5. Supporting applied and basic research to address, over time, continuing knowledge on costs and benefits. While it is true that many forecasters are too confident of their projections, it is also true that people tend to see most future innovations as the unpredictable result of basic research, much of it coming from university scholars. Many of the residual uncertainties may be amenable to ordinary bean-counting – doing surveys, accessing existing data files, etc. Shouldn’t the decisionmaker order such work at the time of a major decision so that a later lookback can take place?

6. About lawyers and non-lawyers as policy designers. The APA was written and is managed by lawyers, and in many agencies, rule-writing is done within the agency’s legal team. Could it be true that lawyers tend to be more uncomfortable with evolving analytics than other professions are?