
Evidence: Is It Really Overrated?

July 4th, 2014 at 11:44 am
By Jared Bernstein

A few weeks ago, during the evidentiary dustup between Piketty and the FT, I quasi-favorably quoted a Matt Yglesias line re empirical evidence being overrated. A number of readers were understandably unhappy with that assertion, arguing that they come here to OTE for fact-based analysis grounded in empirical evidence (with, admittedly, a fair bit of heated, if not overheated, commentary). If facts all of a sudden don’t matter anymore, why not just call it a day and join the Tea Party?

So let me add a bit more nuance. The statement is about the quality and durability of evidence, which is not only varied, but, at least in the economic policy world, increasingly problematic. A number of developments have significantly lowered the signal-to-noise ratio.

I’d divide the evidence problem into two separable categories. First, statistical issues about what’s “true” and what’s not, and second, ideological ways in which the noise factor is amplified at the expense of the signal. It’s this latter bit that’s arguably gotten worse.

–Variable coefficients: Sounds wonky, but all I’m saying is that relationships change over time. As economies and societies evolve, as globalization increases, as technologies change, as cultural norms ebb and flow, we shouldn’t expect the relationships between inflation and unemployment, minimum wages and jobs, growth and inequality, education and pay, or pretty much anything else to stay the same. It’s rare for the sign to “flip,” meaning I expect education and pay to remain positively correlated and inflation and unemployment to remain negatively correlated. But magnitudes change a lot. No elasticity is etched in stone! So be skeptical when someone tells you “this leads to that, and therefore this is a good or bad idea!”

That’s “skeptical,” not cynical. Judicious use of data by unbiased practitioners can help inform our understanding of the relationship between this and that, at this juncture in time.
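To see the point in miniature, here is a hedged sketch in Python, on synthetic data with invented numbers: the same regression run on two different eras recovers two different coefficients, because the underlying relationship drifted. The sign never flips, but the magnitude moves a lot.

```python
import numpy as np

rng = np.random.default_rng(0)

def ols_slope(x, y):
    """Ordinary least squares slope of y on x."""
    xc = x - x.mean()
    return xc @ (y - y.mean()) / (xc @ xc)

n = 500
x = rng.normal(size=2 * n)
noise = rng.normal(scale=0.5, size=2 * n)

# A hypothetical coefficient that drifts between eras: -0.6 early, -0.2 late.
# Negative in both, so the sign holds, but the magnitude changes a lot.
true_beta = np.where(np.arange(2 * n) < n, -0.6, -0.2)
y = true_beta * x + noise

print(f"estimated coefficient, early era: {ols_slope(x[:n], y[:n]):.2f}")
print(f"estimated coefficient, late era:  {ols_slope(x[n:], y[n:]):.2f}")
```

An elasticity estimated on the early sample would badly mispredict the late one; that is the sense in which no coefficient is etched in stone.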

–Complexity: As I stressed in the first link above, it is hard to know what to make of empirical evidence when it is based on simulated data, meaning data that the analyst has changed in ways that she believes are necessary (remember, we’re not talking ideological thumb on the scale yet). In my world, this happens a lot with income and wealth data (it was also the basis of the Piketty/FT argument).

Such data are often incomplete in ways that matter. They might, for example, leave out the value of government-provided benefits. So analysts are clearly justified in tacking on such things as the value of food stamps or medical benefits. But how to do so can be deceptively tricky, and there’s often no obvious right answer.

Health care benefits in the US provide the best example. If you just append the market value of coverage to the income of, say, Medicaid recipients, as CBO does in some widely used tabulations, you’re inflating the incomes of the poor in ways that reflect not their buying power, but the widely documented excesses of the US health care system, including inflated salaries and overpriced medicines.

One study that employed this and other even more dubious methods published results showing that between 1989 and 2007, when every other study showed income inequality growing, the income of the poorest 20% rose the most in real terms while that of the top 5% fell the most (see figure here).

With such analysis proliferating, you’ve really got to know what you’re looking at. I’m also increasingly drawn toward simple data that don’t involve a lot of researcher manipulation, like these annual wage trends.
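A toy version of the Medicaid example above, with every figure invented for illustration: appending the market cost of coverage to a poor household’s cash income makes measured income jump, and the appended value rises mechanically with health-sector prices.

```python
# All figures hypothetical, purely for illustration.
cash_income = 12_000          # a poor household's cash income
coverage_market_cost = 8_000  # market cost of equivalent health coverage

measured = cash_income + coverage_market_cost
print(f"cash income:            ${cash_income:,}")
print(f"with coverage appended: ${measured:,}")
print(f"apparent income gain:   {measured / cash_income - 1:.0%}")

# If health-sector prices rise 10%, the appended "market value" rises too,
# and measured income of the poor rises with it -- no change in buying power.
inflated = cash_income + int(coverage_market_cost * 1.1)
print(f"after 10% health-price inflation: ${inflated:,}")
```

The household is no better off in either scenario; the measured gain is an artifact of how the analyst chose to value the benefit.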

–Noisy data: I wrote about this yesterday re first quarter GDP—a huge negative outlier—but the fact that statistical evidence always has uncertainty bands around it gets quickly lost in the debate. I try to remind everyone, most notably myself, that the payroll jobs number we all obsess over every month has a 90% confidence interval of plus or minus 90,000 jobs. Imagine how differently yesterday’s big jobs report—a 288K pop on payrolls—would have gone if the result had been 200K. But in fact, that result is within the confidence interval (the CI means there’s a 90% chance that the true change in payrolls was within 90K of the point estimate, above or below).
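The arithmetic behind that point, as a quick sketch using the plus-or-minus 90K half-width from the text: a 200K print would have felt very different from a 288K print, yet it sits inside the interval, so the data cannot reliably distinguish the two.

```python
point_estimate = 288_000  # reported monthly payroll gain
half_width = 90_000       # 90% confidence interval half-width, per the text

lo, hi = point_estimate - half_width, point_estimate + half_width
print(f"90% CI around the point estimate: [{lo:,}, {hi:,}]")

# A 200K report would have disappointed, but it falls inside the interval;
# a 150K report would not.
for alternative in (200_000, 150_000):
    verdict = "inside" if lo <= alternative <= hi else "outside"
    print(f"{alternative:,} is {verdict} the 90% CI")
```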

Which brings me to the second, ideological category. More so than in the past, we live in a world where money doesn’t just buy power and political access—that’s always been the case. It also buys the answers it wants, from faux climate scientists and “think tanks” that can generate whatever result you need. Recall, for example, the Heritage Foundation’s initial projections, later revised (or deep-sixed, as I recall), that the “dynamic effects” of one of Paul Ryan’s budgets would drive the unemployment rate down to 2%.

You could just say, “well, I’m not going to believe evidence that comes from people or institutions with an ideological bias.” But that would be wrong. Paul Krugman is a “liberal,” but in countless empirical columns and blogs his analysis has been careful and accurate. The Center on Budget, EPI, CEPR too. Me too. That’s not to say they or I don’t make mistakes, of course. It’s that we’re almost always careful to know the data pretty well and not go beyond it. (I know I should balance the above with conservative examples—I think Chuck Lane at the Post often uses evidence effectively, as do Jimmy P and others at AEI, like Hassett and Strain, and Doug Holtz-Eakin on a good day.)

So what is a thinking person, who’s not a statistician, to do? Perhaps the answer comes from movie criticism. Before I had kids I used to really enjoy the movies (now I click on Netflix, immediately fall asleep, hopefully waking up in time to catch my laptop before it crashes to the floor). Figuring out which films to go see was vastly aided by knowing which critics to trust.

Same with evidence. I hesitate to start naming names, since I’ll surely leave out some valued sources, but I already named some wonky types above. I’d add VOX, Upshot, and Wonkbook, all of which I’d argue exist in no small part for this very reason—as go-to places for quality evidence (now that I’m writing for PostEverything, I’d shamelessly add them too). Dean Baker’s Beat the Press is must reading in this space.

In sum, let me rephrase the original point: careful evidence based on transparent data with respect for confidence intervals and variable coefficients is great. A lot of the rest is overrated.

Reposted with permission from jaredbernsteinblog.com

More on Evidence: The Importance of “Surprising Validators”

July 5th, 2014 at 3:27 pm

“…more people know what scientists think about high-profile scientific controversies than polls suggest; they just aren’t willing to endorse the consensus when it contradicts their political or religious views. This finding helps us understand why…factual and scientific evidence is often ineffective at reducing misperceptions and can even backfire on issues like weapons of mass destruction, health care reform and vaccines. With science as with politics, identity often trumps the facts.”

You might personally understand and even believe the science of evolution or climate change, but if your religion, ideology, or the group with whom you closely identify forbids such beliefs, you’ll find a way to suppress them.

This adds another important layer of complexity to the issues I raised yesterday. It’s not enough to worry about what constitutes high-quality evidence. Establishing facts is just the first hurdle. An even higher hurdle exists when the facts challenge people’s belief systems (other commenters made similar points).

One way through this dilemma is for icons of your movement to validate the facts. This reminded me of related research I’ve come across suggesting that it takes a “surprising validator” to change minds. That is, if a trusted and elevated leader essentially gives group members permission or clearance to accept facts they already know, they’re likely to do so.

Someone may, for example, really understand and believe, at least in their logical mind, the process of evolution. But if they’re a member of a group—a group that’s deeply important to them—wherein membership means disbelieving evolution, then the desire to maintain the emotional connection with the group will trump the known facts of evolution, even though their logical mind endorses them.

However, if a group leader alters the belief system to allow evolution as an accepted explanation of how things work, then logic meets conviction and facts prevail. Such leaders work like gatekeepers, deciding which facts are allowed into the system and which are kept out.

That sounds like awfully tough going, however, especially when group identity is intimately tied up in denying some key set of facts. Disbelief of climate change or belief in trickle-down economics isn’t a pet side theory for anti-environmentalists (e.g., those who profit from extracting fossil fuels) and anti-tax crusaders. It’s the whole shooting match.

So if you believe this, your job of convincing people with fact-based evidence just got harder. Not only do you have to boost the weakening signal-to-noise ratio with strong, credible analysis; you’ve also got to convince their leaders to open the gate and let these facts into their system.

Good luck with that…

Reposted with permission from jaredbernsteinblog.com
