Friday, March 2, 2018

Evidence-based policymaking: Moneyball, or GIGO?


A confluence of recent events around the capital city showed that many people wish for better information when making public policy. This may seem surprising--the President's budget as well as the recent tax changes rely on supply-side economics, climate change denial is practically an article of faith for the majority party, and Congress may or may not be able to undo the 23-year-old provision that restricts the Centers for Disease Control from researching gun violence--but it's fair to say the desire for data is here... co-existing with a lot of other desires.

As a U.S. Senator from Indiana, Dan Quayle sponsored the Job Training Partnership Act of 1982, based on research showing the effectiveness of job training programs.
Some years ago, as part of our long research project on policymaking by the President and Congress (see note below), Paul J. Quirk and I examined the government's ability to manage complex or changing information. When issues are technically complex, as most have become in one way or another, we wrote, "they must obtain the necessary information from reliable sources, as well as mastering it to the extent required to make good policy." The challenges are many:
Knowledge in a policy area may be limited because a large amount of undifferentiated material is presented, or because there is no consensus on how to interpret a given sequence of events. Constituencies may be unaware of relevant information or refuse to accept it. Alternatively, new information may pose difficulties for assumptions that have become entrenched among political elites.
Our conclusions were guardedly optimistic: "[G]overnment is often able to make intelligent policy in the face of complex or changing information.... The key seems to be a predisposition to hear the news. [In most cases we examined,] policy initiative was taken by someone who was already committed to the position supported by the new research. Those people then used the new information to convince the rest of the government to go along with them." What's unstated is that there were enough uncommitted people willing to be convinced by the policy entrepreneur. Is that still true, or has politics become completely overwhelmed by rigid constituency positions?

Whatever the goal of a policy (e.g., containing health care costs, reducing barriers faced by small businesses, preventing terrorist attacks), we do better when we know what we're doing. There are reasons why we might not choose the optimal solution--ideological beliefs, constituency group benefits, social norms--but on the whole someone promoting a policy is better off with the best information they can get.

It gets trickier when, as in most cases, we're talking about assessing government programs that are already underway. Again, if I want to, say, contain health care costs, I should want the best data I can get on how well existing programs are doing that, and if they're not doing very well, I can use those data to learn how to improve performance. If I'm worried that the results will be used to fire me, de-fund my agency, and/or repeal the Affordable Care Act, I'm going to see assessment as a threat. If I have to collect and analyze the data in addition to doing my job, I'm going to see it as a hassle at best, if not a suicide mission.
Nick Hart (Source: Bipartisan Policy Center)

Nick Hart and Kathryn Newcomer allow as much. They're the co-authors of a new technical paper from the Bipartisan Policy Center, rolled out this week at a forum co-sponsored by the Forum for Youth Investment. They discuss evidence-based evaluation initiatives undertaken by the George W. Bush and Barack Obama administrations, noting that both administrations tended to use results to guide budgeting decisions. In Newcomer's words, "accountability does tend to trump learning."
(L to R) John Kamensky, Marcus Peacock and Shelley Metzenbaum
at the BPC/Forum for Youth Investment Event
In a follow-up panel, Marcus Peacock, who worked on the Bush efforts, praised President Trump's goal of repealing two regulations for every one promulgated, and called for nurturing "pay-for-success." In response, audience member Christine Heflin (who works in the Commerce Department office of performance, evaluation and risk management) called for finding ways to "reward learning." Panel member Shelley Metzenbaum, who worked on the Obama efforts, agreed, noting that pay-for-success "becomes a signal that we're going to de-fund what's not 'working.'"

I'm with Metzenbaum, particularly in today's fraught political and budgetary environment. It's hard to imagine good-quality evidence emerging when the people running programs are justifiably worried about ideological opponents who want those programs to die, not to mention fellow travelers who would happily redirect their resources to programs of their own. Yet we need good information to achieve public objectives.


(L to R) Tamika Lewis, Andrew Guthrie Ferguson, Cornell William Brooks, Rachel Levinson-Waldman
At a more basic level, we should be suspicious of what Cornell William Brooks calls "the presumed infallibility of data." Brooks is an attorney, pastor, civil rights activist and senior fellow at the Brennan Center for Justice, which hosted a panel this week on how new data collection efforts affect criminal justice. The panel noted that more sophisticated data collection by police departments--and more recently, U.S. Immigration and Customs Enforcement--has resulted in greater attention to poor and minority populations, leading to more arrests and a longer data trail that makes it harder to get jobs or housing in ensuing years (what panel member Tamika Lewis of the Our Data Bodies Project called "the cycle of injustice").

Panel members seemed divided over whether the negative impacts of big data on minority populations were intentional or the result of institutional racism. You don't have to believe in conspiracies, in a country where racial differences are baked into the economy and society, to imagine that they're also baked into metrics, from credit scores to violence risk assessments.
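
To illustrate the mechanism in the abstract--not any particular real-world scoring system--here is a toy sketch: if a "neutral" risk score uses prior arrests as an input, and one group is policed more heavily than another, the disparity in enforcement shows up in the score even when underlying behavior is identical. The function, inputs and numbers below are hypothetical, invented only for illustration.

    # Toy illustration of proxy bias: the score never looks at group membership,
    # but an input shaped by unequal enforcement carries the disparity anyway.
    # All names and numbers are hypothetical.

    def risk_score(prior_arrests: int, age: int) -> float:
        """A 'neutral' score that uses only arrest history and age."""
        return 10 * prior_arrests + max(0, 30 - age)

    # Two people with identical behavior, but one lives in a neighborhood
    # policed twice as heavily, so more of that behavior became recorded arrests.
    lightly_policed = risk_score(prior_arrests=1, age=25)   # 15
    heavily_policed = risk_score(prior_arrests=2, age=25)   # 25

    print(lightly_policed, heavily_policed)

The score's formula is colorblind; the input data are not, which is the panel's point.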

This doesn't mean data are inherently bad, or that there are no useful metrics. Both panels talked about the need to include everyone involved throughout the assessment process. Lewis said data could be used to identify institutional racism instead of replicating it; Brooks suggested community members should be involved in deciding what would be measured and how; and law professor Andrew Guthrie Ferguson pointed out that even conclusive data don't necessarily point to a single remedy (more law enforcement), and suggested social workers, pastors and community members as alternatives.



People outside of government could use better data, too. Carl Wallace, shown above at a presentation to 1 Million Cups Fairfax in February, has developed C-Score, an algorithm for evaluating small business proposals that he is marketing to banks, incubators and universities. It's a sort of "Moneyball" for entrepreneurship, attempting to aggregate what we know about the prerequisites for small business success and get away from reliance on hunches, however well-founded. It occurred to several people in attendance that it could also serve as a diagnostic tool for entrepreneurs who want to improve their pitches. C-Score will operate at the policy formulation stage, serving the common interests of entrepreneurs, financial institutions and cities in developing a strong base of locally-owned small businesses. It's far from clear that government program evaluations have found similarly common interests among the audiences for their evidence.
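
To make the "Moneyball" analogy concrete, here is a minimal sketch of what a weighted-factor scoring model of this general kind could look like. The factor names, weights and 0-100 scale are my own illustrative assumptions, not Wallace's actual C-Score methodology.

    # Hypothetical weighted-factor score for a small business proposal.
    # Factor names, weights, and the 0-100 scale are illustrative assumptions only.

    FACTOR_WEIGHTS = {
        "founder_experience": 0.30,   # relevant industry experience, normalized 0-1
        "market_evidence": 0.25,      # documented customer demand or pre-orders
        "financial_plan": 0.25,       # realism of cost, revenue and cash-flow projections
        "local_fit": 0.20,            # match with the local economy and available support
    }

    def proposal_score(factors: dict) -> float:
        """Combine factor ratings (each 0.0-1.0) into a single 0-100 score."""
        total = sum(FACTOR_WEIGHTS[name] * factors.get(name, 0.0)
                    for name in FACTOR_WEIGHTS)
        return round(100 * total, 1)

    example = {
        "founder_experience": 0.8,
        "market_evidence": 0.6,
        "financial_plan": 0.5,
        "local_fit": 0.9,
    }
    print(proposal_score(example))  # 69.5

Even a toy model like this makes the inputs and their relative weights explicit, which is the whole point of moving from hunches to evidence: the assumptions are on the table where lenders and entrepreneurs alike can argue about them.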

SEE ALSO:

Andrew Guthrie Ferguson, The Rise of Big Data Policing: Surveillance, Race, and the Future of Law Enforcement (NYU Press, 2017)

Nick Hart and Kathryn Newcomer, "Presidential Evidence Initiatives: Lessons from the Bush and Obama Administrations' Efforts to Improve Government Performance," Bipartisan Policy Center, 28 February 2018

"The Promise of Evidence-Based Policymaking: Report of the Commission on Evidence-Based Policymaking," September 2017

OLDER SOURCES ON POLICY LEARNING:

Jane Mansbridge, "Motivating Deliberation in Congress," in Sarah Baumgartner Thurow (ed), Constitutionalism in America (University Press of America, 1988)

William Muir, Legislature (University of California, 1986)

Richard Rose, Lesson-Drawing in Public Policy: A Guide to Learning across Time and Space (Chatham House, 1993)

My research with Paul J. Quirk on policy learning was never completed for publication. We did present a paper, "The President and Congress as Policy Makers: Dealing with Complexity and Change," at the Midwest Political Science Association conference, April 15-17, 1999. Quotations are from a book draft.
