Sunday, January 31, 2016

The common belief that more immigrant workers depress native workers’ wages or employment is not a good representation of what happens

See New Evidence on Immigrants and Jobs: A large influx of Cubans to Miami did not depress the wages or employment of low-skill American workers by Giovanni Peri and Vasil Yasenov in the WSJ. Mr. Peri is chairman of the economics department at the University of California, Davis, where Mr. Yasenov is a Ph.D. candidate. Excerpt:
"We have reanalyzed the Mariel episode using the largest and most representative annual sample of high-school dropouts from the May/ORG Current Population Survey. It includes 44 cities among which a recently developed statistical methodology allows the researcher to identify those whose labor markets behaved as closely as possible to Miami’s between 1972 and 1979. We then compared the average wages and employment rates of low-skill workers in Miami with such a control group after 1979.

Our results—released as National Bureau of Economic Research Working Paper No. 21801 on Dec. 15—confirm Mr. Card’s original study. There is no evidence that Miami’s low-skill workers experienced wage or employment decline relative to those in our control group of cities in 1980, 1981 or 1982. We also analyzed different subgroups—males, females, Hispanics and non-Hispanics—and did not find any significant wage effect in Miami after 1979.

This result suggests that the common belief that more immigrant workers depress native workers’ wages or employment is not a good representation of what happens. Earlier research by one of us has shown that native workers do not suffer the negative impact of arriving immigrants because they take different jobs. Moreover, their arrival stimulates productivity and growth in the economy.

Miami’s experience after the Mariel boatlift suggests that an influx of refugees from Syria to the U.S. would have no significant economic impact on American workers."
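A simplified sketch can illustrate the control-group idea described in the excerpt. This is not the authors' actual procedure (the paper uses a synthetic control method with optimized weights); the sketch below simply ranks candidate cities by how closely their pre-1980 low-skill wage series tracked Miami's and then compares post-1979 averages. All numbers are made up for illustration.

    import numpy as np

    # Hypothetical annual log-wage series for low-skill workers, 1972-1982.
    # These values are illustrative only, not data from the paper.
    years = np.arange(1972, 1983)
    pre = years < 1980  # 1972-1979 is the matching window

    miami = np.array([2.10, 2.12, 2.11, 2.13, 2.15, 2.14, 2.16, 2.17, 2.16, 2.15, 2.17])
    candidates = {
        "CityA": np.array([2.09, 2.11, 2.12, 2.12, 2.14, 2.15, 2.15, 2.16, 2.18, 2.19, 2.20]),
        "CityB": np.array([2.30, 2.28, 2.27, 2.25, 2.24, 2.22, 2.21, 2.20, 2.19, 2.18, 2.17]),
        "CityC": np.array([2.11, 2.12, 2.10, 2.14, 2.14, 2.13, 2.17, 2.16, 2.15, 2.16, 2.18]),
    }

    # Rank candidate cities by pre-1980 distance from Miami's series.
    distances = {name: np.linalg.norm(series[pre] - miami[pre])
                 for name, series in candidates.items()}
    control_names = sorted(distances, key=distances.get)[:2]  # keep the two closest

    # Compare post-1979 averages: Miami versus the control group.
    control = np.mean([candidates[n] for n in control_names], axis=0)
    gap = miami[~pre].mean() - control[~pre].mean()
    print("Control cities:", control_names)
    print("Post-1979 Miami-minus-control wage gap: %.3f" % gap)

A gap near zero is what a "no effect" finding like the one reported above would look like in this toy setup.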

Adam Smith On Free Trade & Knowledge

From Don Boudreaux of Cafe Hayek.
"from page 546 of Doug Irwin’s excellent essay “Adam Smith and Free Trade,” which is chapter 32 in Ryan Patrick Hanley, ed., Adam Smith: His Life, Thought, and Legacy (2016) (citation omitted; link added):
Smith drew on the notion that the division of labor was the key driver of productivity improvements.  Increasing productivity allowed more output to be produced from the same amount of capital and labor inputs.  Such improvements were the basis for rising living standards.  Because the division of labor is limited by the extent of the market, free trade widened the market and therefore permitted a more refined division of labor that improved productivity.  In particular, free trade facilitated the exchange of knowledge across countries about new production methods and business practices.
While Smith did not adequately emphasize the indispensability of creative, entrepreneurial innovation to mass flourishing – or to the launch of what Deirdre McCloskey calls “the Great Enrichment” – what Smith did emphasize is also vitally important and typically overlooked in political and public discussions of international trade.  Not only are physical resources and human labor put to more productive uses when specialization deepens and when competition for consumer patronage is not artificially muted by economically meaningless political borders, but knowledge itself – that most precious and misunderstood ‘input’ into the operation of any commercial society – is spread more widely, effectively, surely, and quickly when trade is free.

In short, to restrict trade is to restrict the flow of knowledge.  It is to make people less informed and, over time, stupider than they would otherwise be."

Saturday, January 30, 2016

Revisiting Gore’s Hurricane Prediction

By Marlo Lewis, Jr. of CEI.
"In An Inconvenient Truth (pp. 94-95), Al Gore blamed global warming for Hurricane Katrina and the devastation of New Orleans. Not in so many words but through heavy-handed insinuation no movie goer could miss.
It seemed plausible because Gore invoked an “emerging consensus linking global warming to the increasing destructive power of hurricanes . . . based in part on research showing a significant increase in the number of category 4 and 5 hurricanes.”

The research to which Gore alluded was Webster et al. (2005), a study which found a significant increase in the number and percentage of category 4 and 5 hurricanes during 1970-2004. The study was hotly debated at the time. For example, on the same day Science magazine published the Webster study, climatologist Patrick Michaels published a critique. Michaels showed that, in the Atlantic basin—the hurricane formation area with the best data over the longest period—the “trends” observed by Webster et al. disappeared once data going back to 1940 were included. Roughly the same number and percentage of intense hurricanes occurred during 1940-1970 as occurred during 1970-2004.

This week’s edition of CO2Science.Org reviews “Extremely Intense Hurricanes: Revisiting Webster et al. (2005) after 10 Years,” a study by Phil Klotzbach of Colorado State University and Christopher Landsea of NOAA/NWS/National Hurricane Center.

Klotzbach and Landsea examine whether the “trends” found by Webster et al. continue after an additional 10 years of data. The two researchers find that “the global frequency of category 4 and 5 hurricanes has shown a small, insignificant downward trend while the percentage of category 4 and 5 hurricanes has shown a small, insignificant upward trend between 1990 and 2014.” They further report that “Accumulated cyclone energy globally has experienced a large and significant downward trend during the same period.” In other words, there has been a large decrease in the overall destructive power of hurricanes based on an assessment of the number, strength, and duration of all individual hurricanes worldwide.

Klotzbach and Landsea conclude that the intense-hurricane trends observed by Webster et al. were primarily due to “observational improvements at the various global tropical cyclone warning centers, primarily in the first two decades of that study.”"

Healthcare License Turf Wars: The Effects of Expanded Nurse Practitioner and Physician Assistant Scope of Practice on Medicaid Patient Access

By Edward J. Timmons of Mercatus.
"Occupational licensing poses a significant barrier to professionals in many industries and has recently come under increased scrutiny as researchers debate its costs and benefits. Increasing licensing requirements for healthcare professionals in particular may be promoted as a measure to improve the quality of care, but the main effect may be to raise costs for patients.

A new study for the Mercatus Center at George Mason University explores how the licensing requirements for physician assistants and nurse practitioners affect medical outcomes for Medicaid recipients. It finds that prohibiting physician assistants from prescribing drugs to patients significantly raises costs, by more than 11 percent on average, translating to about $109 in extra expenses for each Medicaid beneficiary. Relaxing these restrictions would result in savings for Medicaid beneficiaries and would not cause any changes to the availability of health care.

To read the study in its entirety and learn more about its author, Edward J. Timmons, see “Healthcare License Turf Wars: The Effects of Expanded Nurse Practitioner and Physician Assistant Scope of Practice on Medicaid Patient Access.”

THE GROWING IMPORTANCE OF PHYSICIAN ASSISTANTS AND NURSE PRACTITIONERS

Both the nurse practitioner and the physician assistant professions have grown as consumer preferences have evolved in response to declining access to services. The healthcare industry has faced the following pressures:
  • Many Americans would prefer to see either a nurse practitioner or a physician assistant instead of having to wait for an available physician.
  • The ability of both professions to respond to increases in demand is limited by occupational requirements that state governments impose.
State governments have imposed increasingly strict training requirements on both professions, which may include requiring workers to obtain a master’s degree before practicing, and many states have imposed additional restrictions on nurse practitioners’ and physician assistants’ ability to prescribe drugs to patients. In the past decade, states have gradually granted more autonomy to these professionals when it comes to prescribing drugs, but some variation remains in how these professions are regulated:
  • Two states (Florida and Kentucky) currently do not allow physician assistants to prescribe drugs, while the other 48 do under the supervision of a physician.
  • Two states (Alabama and Florida) currently do not allow nurse practitioners to prescribe drugs, 31 allow it under the supervision of a physician, and the other 17 grant nurse practitioners full prescribing authority.
 KEY HIGHLIGHTS

Study Design

This is the first study to estimate the effects of expanded scope of practice on Medicaid patients. It examines spending on prescription drugs and outpatient claims from 1999 to 2012. The study first controls for differences between states such as the state’s unemployment rate and profession density. Then, to measure what effect expanded scope of practice has on medical care, the study looks at how restrictions on prescribing drugs affect the following areas:
  • Cost of providing care. The cost of care is measured by total Medicaid claims per beneficiary, total outpatient Medicaid claims per beneficiary, and total prescription drug Medicaid claims per beneficiary.
  • Access to care. Access to care is measured by total Medicaid claims and total care days.
Expanded Scope of Practice Reduces Costs and Doesn’t Harm Access to Care

Comparing how scope of practice differs among states and has changed over time reveals evidence similar to that shown by the earlier literature on occupational licensing:
  • Allowing physician assistants to prescribe drugs is associated with a reduction in the cost of medical care. This reduction is quite large, ranging from 11.8 percent to 14.4 percent, depending on the specification. But the evidence is less convincing that allowing nurse practitioners to prescribe drugs would bring as large a reduction in costs for beneficiaries.
  • Broader scope of practice for both physician assistants and nurse practitioners is not associated with any changes in access to care. In other words, the concern that relaxing restrictions would harm access is unfounded.
  • Taxpayers also stand to benefit from relaxing occupational licensing laws, because relaxing these laws would reduce the cost of delivering Medicaid services to low-income Americans.
 POLICY RECOMMENDATIONS

State policymakers (and taxpayers) interested in reducing the cost of care for citizens on Medicaid should consider relaxing restrictions on nurse practitioners and physician assistants. The body of research on this topic suggests that allowing nurse practitioners and physician assistants broader scope of practice has little impact on the quality of care delivered, increases access to health care, and also potentially reduces the cost of providing health care to patients. Research shows that broadening the scope of practice for these professions is beneficial for consumers in the healthcare market."
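As a rough consistency check on the figures quoted earlier in this summary (a cost increase of more than 11 percent equal to about $109 per beneficiary), the implied baseline spending per beneficiary can be backed out directly. This is my own back-of-envelope inference, not a number reported by the study:

    # Back-of-envelope check of the figures quoted above: if prohibiting
    # physician assistants from prescribing raises costs by roughly 11% and
    # that increase equals about $109 per beneficiary, the implied baseline
    # spending per beneficiary is roughly $109 / 0.11.
    cost_increase = 0.11              # "more than 11 percent on average"
    extra_cost_per_beneficiary = 109  # dollars

    implied_baseline = extra_cost_per_beneficiary / cost_increase
    print(f"Implied baseline spending per beneficiary: ${implied_baseline:,.0f}")  # ~$990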

Friday, January 29, 2016

The warmest interval of the 20th century is not unique (tree ring evidence in Nepal)

See Four Centuries of Spring Temperatures in Nepal. By Craig D. Idso of Cato.
"In the past two decades, much scientific research has been conducted to examine the uniqueness (or non-uniqueness) of Earth’s current climate in an effort to discern whether or not rising atmospheric CO2 concentrations are having any measurable impact. Recent work by Thapa et al. (2015) adds to the growing list of such studies with respect to temperature.

According to this team of Nepalese and Indian researchers, meteorological stations in Nepal are few (particularly in the mountain regions) and sparsely distributed across the country, making it “difficult to estimate the rate and geographic extent of recent warming” and to place it within a broader historical context. Thus, in an attempt to address this significant data void, Thapa et al. set out “to further extend the existing climate records of the region.”

The fruits of their labors are shown in the figure below, which presents a nearly four-century-long (AD 1640-2012) reconstruction of spring (Mar-May) temperatures based on tree-ring width chronologies acquired in the far-western Nepalese Himalaya. This temperature reconstruction identifies several periods of warming and cooling relative to its long-term mean (1897-2012). Of particular interest are the red and blue lines shown on the figure, which demark the peak warmth experienced during the past century and the temperature anomaly expressing the current warmth, respectively. As indicated by the red line, the warmest interval of the 20th century is not unique, having been eclipsed four times previously (see the shaded red circles) in the 373-year record – once in the 17th century, twice in the 18th century and once in the 19th century. Furthermore, the blue line reveals that current temperatures are uncharacteristically cold. Only two times in the past century have temperatures been colder than they are now!

Figure 1. Reconstructed spring (March-May) temperature anomalies of the far western Nepal Himalaya, filtered using a smoothing spline with a 50% frequency cutoff of 10 years. The red line indicates the peak temperature anomaly of the past century, the blue line indicates the current temperature anomaly, the shaded red circles indicate periods in which temperatures were warmer than the peak warmth of the past century, and the shaded blue circles indicate periods during the past century that were colder than present. Adapted from Thapa et al. (2015).


In light of the above facts, it is clear there is nothing unusual, unnatural or unprecedented about modern spring temperatures in the Nepalese Himalaya. If rising concentrations of atmospheric CO2 are having any impact at all, that impact is certainly not manifest in this record."

The US has less income inequality today than in 2000

By James Pethokoukis of AEI.
"I gave a brief talk earlier today about income inequality to the National Academy of Social Insurance. (My co-speaker was my frequent CNBC debate partner Jared Bernstein.) And I think the talk was well-received. In it, I made a point to highlight two data points or data sets that often get ignored.

[Chart: top income shares since 1979, World Wealth and Income Database]

First, high-end income inequality has risen a lot in recent decades. According to the World Wealth and Income Database — a resource assembled by a number of inequality researchers including French economist and best-selling author Thomas Piketty — the top 1% income share since 1979 has surged to 21.24% from 9.96%, the top 0.1% to 10.26% from 3.44%, and the top 0.01% to 4.89% from 1.37%.

Less frequently noted is that inequality today is actually lower than it was in 2000 (although the peak was in the 2005-2007 period). Overall it’s been up and down this century, but lower now than 15 years ago — as the above shows (using WWID data). I might also point out that the big rise in inequality during the 1990s was matched by fast rising incomes, showing you can have both. Real incomes grew by a cumulative 32% from 1993-2000.

Second, another common inequality ratio is the comparison of big company CEO pay to that of the average worker. It was 373 last year, according to the AFL-CIO. Now, as my AEI colleague Mark Perry points out, you really shouldn’t compare the wages of the average worker to the compensation of CEOs at a few hundred large companies. In 2014, the BLS reported that the average pay for America’s 250,000 chief executives was only $181,000. So the CEO-to-worker pay ratio for the average CEO compared to the average worker is only about 4, not 373.
But let’s take that 373 number. The pay ratio was actually higher in 2000 at 383, slipping to 351 in 2007, then to 193 in a recessionary 2009. Here is economist Steven Kaplan from a recent podcast chat with me:
Pethokoukis: Every year one of the unions trots out the stats saying the average CEO makes, whatever, 400 times the average worker…. [and] CEOs seem to make a lot more versus the workers in this country, than some other advanced economies, like Japan. So why do American executives seem to make so much more than in some other very wealthy advanced countries?
Kaplan: I would say that, number one, Japan is perhaps not the country you want to emulate over the last 20 years.  I think if you were to look at the United Kingdom, you would see trends very similar to our own. I think if you were to look at Western Europe, you know, there’d be more variation, but you’d see at a number of the larger multinational companies where the executives are mobile, the pay has gone up. And it may not be at US numbers, but it’s getting closer there.
So I think around the world, you’ve seen CEO pay go up and is – may not be, again, equal to the US, but it’s certainly going up because of these same forces [of technology and globalization]. And you also see, particularly in Western Europe you see a lot of the top executives going to work for private equity funded firms, where they are paid more like US executives. And because, again – and that’s actually a very useful example for the private equity firms because the private equity investors are not people who are – tend to overpay people. And the incentives they give to their CEOs, whether they’re in the US or Europe, are pretty much the same.
The point here is that two common inequality metrics show less inequality today than when the century started. Now maybe those numbers will climb ever higher in the years to come, as we move further beyond the Great Recession. But what isn’t speculative is that the economy remains stuck in 2% mode. (And maybe not growing at all in the last quarter of 2015.) It would be great if we had more prosperity to share, wouldn’t it?"
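The arithmetic behind the two CEO-pay ratios in the excerpt is easy to check. The average worker pay used below is an assumed round figure (about $45,000 a year) for illustration only, not a number from the post:

    # Rough check of the two CEO-to-worker pay ratios discussed above.
    # avg_worker_pay is an assumed figure for illustration, not from the post.
    afl_cio_ratio = 373           # AFL-CIO ratio for large-company CEOs
    avg_ceo_pay_bls = 181_000     # BLS average pay for all ~250,000 chief executives (2014)
    avg_worker_pay = 45_000       # assumed average annual worker pay

    ratio_all_ceos = avg_ceo_pay_bls / avg_worker_pay
    implied_big_co_ceo_pay = afl_cio_ratio * avg_worker_pay

    print(f"Average-CEO-to-average-worker ratio: {ratio_all_ceos:.1f}")             # ~4
    print(f"Implied large-company CEO pay at a 373 ratio: ${implied_big_co_ceo_pay:,.0f}")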

Thursday, January 28, 2016

Misdeeds in The Revenant spurred by absence of property rights, not capitalism gone wild

By Jonathan Fortier of the Fraser Institute.
"It is the season for American film awards, and the glitterati are all abuzz with adoration for The Revenant, which won a Golden Globe for best picture and is nominated for 12 Oscars. As usual, Hollywood’s stars are using their work as a platform to bash capitalism. The Revenant director Alejandro Iñárritu stated in a Guardian interview that his film attempts to portray the roots of capitalism, about the single-minded attempt to profit from the cutting of trees and the killing of animals and exploitation of the natives. Still further, according to Iñárritu, the early 19th century frontier life is the foundation for many of the ills of contemporary capitalism: “This is the seed, for me, of the capitalism that we live in now: completely inconsiderate of any consequences for nature.”
Leonardo DiCaprio (who stars as the film’s protagonist) “shared” his Golden Globe with indigenous people around the world, and used the stage to decry corporate exploitation of native people. Ravina Bains, associate director of aboriginal policy at the Fraser Institute, has written elsewhere on this site about DiCaprio’s comments, and suggests that we might more profitably begin by considering the troubled problem of private property rights amongst native populations, rather than corporate exploitation.

Capitalism is an economic system that depends on institutional arrangements, namely the rule of law and private property rights (there are others, but those two are foundational). Early 19th century America had neither of these things in the way we think of them now. The frontiersmen preceded the rule of law (or its enforcement), and it was unclear precisely how to think of Native American property rights. The vast tracts of land were thought of as limitless resources owned by no one. As we have learned from the former Soviet Union, China and other communist countries, it’s not capitalism that most devastates the environment, but socialism.

Indeed, the absence of private property rights in socialist and communist societies creates a “tragedy of the commons” scenario where no one is motivated to protect resources and everyone is motivated to get as much as they can before others. This is quite graphically portrayed in The Revenant, with groups of trappers sitting amongst piles and piles of bloody beaver hides preparing them for transport and sale. The appropriation of animals may have been motivated by greed, but it was the absence of property rights and a lack of respect for the natives’ property rights (not “capitalism”) that resulted in the massive overkilling. (Similarly, the burning of the Amazon rainforest for cattle ranching and farming can be better explained by the absence of private property rights than a sort of “capitalism gone wild.”)

One of the fictions about Native American Indians is that they lived peaceful lives with no notion of private property before the arrival of western Europeans. But war amongst native tribes was common, and we know that the supposed collectivism was a myth. Native Americans had personal property rights (in artifacts such as weapons and clothing) and land-use rights (for farming, hunting and fishing) even if those rights were sometimes seasonal and based on a nomadic lifestyle.

Native people and those struggling in poverty in the developing world are done a great disservice by DiCaprio, Iñárritu and others who trot out the tired (and wrong) clichés about capitalism. For, as Richard Pipes, Thomas Bethell and Hernando de Soto (amongst many others) have argued, it is capitalism (and its attendant institutions of private property and the rule of law) that can best improve their lives."

Wednesday, January 27, 2016

Pay For Performance Might Be Working In Education

See IMPACT is Working by Alex Tabarrok of Marginal Revolution.
"In Launching the Innovation Renaissance I wrote:
…teacher pay in the United States seems more like something from Soviet-era Russia than 21st century America. Wages for teachers are low, egalitarian and not based on performance. We pay phys ed teachers about the same as math teachers despite the fact that math teachers have greater opportunities elsewhere in the economy. As a result, we have lots of excellent phys ed teachers but not nearly enough excellent math teachers….
Soviet style pay practices helped to eventually collapse the Soviet system and the same thing is happening in American education. Michelle Rhee is no longer the DC Chancellor but IMPACT, the teacher evaluation system developed under her tenure, is in place. IMPACT uses student scores to evaluate teachers but also five yearly in-class evaluations, three from the school administrator and two from master educators from outside the school. Evaluations are meant not only to reward but also to discover and spread best teaching practices.
The results from IMPACT are starting to come in and they indicate that pay for performance is encouraging low quality teachers to leave, good quality teachers to get better, and high quality teachers to continue teaching and improve even further.

Perhaps not surprisingly the schools with the poorest students see the most teachers leave and they also see the largest gains in student performance as average teacher quality rises. From a new NBER paper by Adnot et al.:
More than 90 percent of the turnover of low-performing teachers occurs in high-poverty schools, where the proportion of exiting teachers who are low-performers is twice as high as in low-poverty schools.
…Our estimates indicate that there are consistently large gains from the exit of low-performing teachers in high-poverty schools. In math, teacher quality improves by 1.3 standard deviations and student achievement by 20 percent of a standard deviation; in reading, these figures are 1 standard deviation of teacher quality and 14 percent of a standard deviation of student achievement.
These are big effects especially when multiplied over many generations of students."

Paul Krugman on French economic policies

From Scott Sumner at EconLog.
"A finance student from Coventry sent me Paul Krugman post from 1997, which has some interesting things to say about today.
Fifteen years ago, just after François Mitterrand became president of France, I attended my first conference in Paris. . . . The only thing I do remember is a conversation over dinner (canard aux olives) with an adviser to the new government, who explained its plan to stimulate the economy with public spending while raising wages and maintaining a strong franc. 
To the Americans present this program sounded a bit, well, inconsistent. Wouldn't it, we asked him, be a recipe for a balance of payments crisis (which duly materialized a few months later)? "That's the trouble with you Anglo-Saxon economists--you're too wrapped up in your theories. You need to adopt a historical point of view." Some of us did, in fact, know a little history. Wasn't the plan eerily reminiscent of the failed program of Leon Blum's 1936 government? "Oh no, what we are doing is completely unprecedented."
Something similar happened to the Hollande government, which is not all that unlike the earlier governments headed by Blum and Mitterrand. All three governments were led by the Socialist party. Krugman then discusses France's supply-side problems:
To an Anglo-Saxon economist, France's current problems do not seem particularly mysterious. Jobs in France are like apartments in New York City: Those who provide them are subject to detailed regulation by a government that is very solicitous of their occupants. A French employer must pay his workers well and provide generous benefits, and it is almost as hard to fire those workers as it is to evict a New York tenant. New York's pro-tenant policies have produced very good deals for some people, but they have also made it very hard for newcomers to find a place to live. France's policies have produced nice work if you can get it. But many people, especially the young, can't get it. And, given the generosity of unemployment benefits, many don't even try.
These supply-side problems largely explain why France's unemployment rate is roughly twice as high as in Germany. (Germany reports 2 rates, for reasons I'll never understand.) Here's the conclusion, written a few years before the euro was created:
But let us not blame French politicians. Their inanities only reflect the broader tone of economic debate in a nation prepared to blame its problems on everything but the obvious causes. France, say its best-selling authors and most popular talking heads, is the victim of globalization--although adroit use of red tape has held imports from low-wage countries to a level far below that in the United States (or Britain, where the unemployment rate is now only half that of France). France, they say, is the victim of savage, unrestrained capitalism--although it has the largest government and the smallest private sector of any large advanced country. France, they say, is the victim of currency speculators, whose ravages President Chirac once likened to those of AIDS. 
The refusal of the French elite to face up to what looks like reality to the rest of us may doom the very European dreams that have sustained the nation's illusions. After this last election it is clear that the French will not be willing to submit to serious fiscal discipline. Will the Germans still be willing to give up their beloved deutsche mark in favor of a currency partly managed by France? It is equally clear that France will not give up its taste for regulation--indeed, it will surely try to impose that taste on its more market-oriented neighbors, especially Britain. That will give those neighbors--yes, even Tony Blair--plenty of reason to hesitate before forming a closer European Union.
But if it turns out that Chirac's political debacle is the beginning of a much larger disaster--the collapse of the whole vision of European glory that has obsessed France for so long--we can be sure of one thing: The French will blame it all on someone else.
The eurozone had two problems: a severe mismatch in the supply side of the various eurozone economies, and a shortfall in total aggregate spending. The combination was disastrous, both a deep recession and an uneven recession---creating internal conflict. Interestingly, Krugman didn't anticipate the overall shortfall in AD (nor did I), but rather the one-size-fits-all problem.
Krugman was skeptical about the euro project, but didn't anticipate disaster:
Now a unified European market is a pretty good idea. There is even a reasonable case for unifying Europe's currencies--although there is also a good case for doing no such thing.
That was also my view. We both saw the one-size-fits-all problem, due to bad supply-side policies in the more socialist parts of Europe, but neither of us anticipated that the ECB would be so contractionary. 
It's too bad that Krugman no longer does this sort of blogging. It's kind of fun to go back and read posts from a time when "socialism" was still a dirty word in America, not a policy that most Democrats have a favorable view of."

Tuesday, January 26, 2016

The Government Poisoned Flint’s Water—So Stop Blaming Everyone Else

A failure of local government, brought on by public employee pensions.

By Robby Soave of Reason.
"Flint, Michigan, was a sickly town long before residents discovered something toxic in their water. The city’s appallingly high crime rate makes it one of the most dangerous places in the country. Its automobile manufacturing industry declined and disappeared decades ago, plunging Flint into a depression from which it never recovered. Its residents are poor. And the local government is so badly in debt that the state had to appoint an emergency financial manager in 2011. Flint is Detroit without the historic appeal. You wouldn’t want to live there. You wouldn’t even want to visit.
On top of all that, local authorities were recently forced to admit that Flint’s drinking water is contaminated with lead. The new water source might also be linked to 77 recent cases of Legionnaires’ disease (resulting in 10 deaths) in the area.

The #FlintWaterCrisis has captured the nation’s attention: many pundits have seized upon the fact that Michigan is governed by a Republican, Rick Snyder, and have thus spun the disaster as one primarily caused by conservative indifference to poor black people. During last Sunday’s Democratic debate, Hillary Clinton explicitly blamed the crisis on Snyder’s leadership:
I spent a lot of time last week being outraged by what's happening in Flint, Michigan, and I think every single American should be outraged. We've had a city in the United States of America where the population which is poor in many ways and majority African American has been drinking and bathing in lead-contaminated water. And the governor of that state acted as though he didn't really care.
He had a request for help and he had basically stonewalled. I'll tell you what, if the kids in a rich suburb of Detroit had been drinking contaminated water and being bathed in it, there would've been action.
She reiterated this stance during an interview with MSNBC’s Rachel Maddow, who holds the same view. Michael Moore, who hails from Flint, all but accused Snyder of pouring lead in the water supply himself. Elsewhere, at Salon, writer Elias Isquith blamed “austerity,” since the root of the problem was the decision to seek a more efficient, cheaper water supply. That decision was not made by Snyder, nor was it made by his emergency financial manager, a Democrat. In fact, Flint’s own city council and mayor approved the idea. State treasurer Andy Dillon—also a Democrat—signed off on it.

In hindsight, the execution of the decision to seek a new water supply was a disaster of epic proportions. But it is one entirely caused by government actors—most of them local government actors—and ignored by regulators until it was too late. The people who have thus far done too little to fix the crisis are also government actors—at the local, state, and even federal levels. Flint is mostly a failure of governance, not a failure of markets.

At the same time, let’s not forget the reason why local authorities felt the need to find a cheaper water source: Flint is broke and its desperately poor citizens can’t afford higher taxes to pay the pensions of city government retirees. As recently as 2011, it would have cost every person in Flint $10,000 to cover the unfunded legacy costs of the city’s public employees.

The #FlintWaterCrisis is not a blueprint for what would happen if libertarians abolished government and let poor people drink poisoned water, as some enemies of free markets are no doubt claiming. Instead, it’s a great example of government failing to efficiently provide even the most basic of public services due to a characteristically toxic combination of administrative bloat and financial mismanagement.

But as long as the media is tossing out blame, perhaps Flint’s public employees—who cannibalized a dying city’s finances—deserve more than just a drop?

Updated at 3:30 p.m. on January 21: Local officials dispute that they played any formal role in the decision to use the Flint River—the source of the contamination—as a water source, instead pinning the blame on the state-appointed emergency manager. The emergency manager, on the other hand, says the decision was made by the city long before his appointment."

The Fed’s Lack of Appreciation for the Healing Power of Markets

By Daniel L. Thornton of Cato.
"In my recent Cato Institute policy analysis, “Requiem for QE,” I analyze the transcripts of the 2008 and 2009 Federal Open Market Committee (FOMC) meetings in some detail.  Among them, the March 2009 transcript stands out as particularly troubling, as it reveals the FOMC’s failure to appreciate an economy’s ability to heal itself through market mechanisms following an adverse macroeconomic shock.

Yet market economies do have self-correcting mechanisms: relative prices change, resources get reallocated, and consumer and business expectations adjust to new realities.  In the case of the financial crisis, expectations had to adjust to the fact that house prices were significantly out of line with economic fundamentals.  As they did,  perceptions of wealth declined in line with house prices.  Workers, particularly those in construction, began the process of acquiring new skills, finding alternative employment, starting new businesses, and so on.  That these self-correction processes were already at work prior to the March 2009 FOMC meeting is one reason why the recession ended just three months later, in June 2009.

The same self-correcting mechanisms can be seen in the very markets in which the financial crisis began.  Put simply, the financial crisis was precipitated by a decline in house prices which, in turn, sparked concerns about the default risk of banks and other financial institutions with large holdings of mortgage-backed securities (MBS).

The problem was that those holding the MBS had no knowledge of the specific real estate underlying the securities.  As a result, once house prices crashed, and mortgage default rates spiked, no one could work out how much a given security was actually worth.  MBS became “toxic assets” that couldn’t be sold on the secondary market.  In consequence, default risk spreads in interbank and other markets in which loans were made to institutions that held large quantities of MBS widened significantly in the early stages of the financial crisis, and subsequently exploded when Lehman made its bankruptcy announcement.

And yet even then, at the very height of the financial crisis, the market’s self-correcting mechanisms were at work.  Financial institutions had begun the process of discovering what specific real estate backed their MBS before Lehman’s announcement, and the process accelerated thereafter.  The success of these efforts is reflected in the fact that many default risk spreads had returned to their pre-Lehman levels (and in some cases to their pre-crisis levels) weeks before the March 2009 FOMC meeting.

Did the FOMC not see that financial markets and the economy had improved significantly by March 2009, or did it just have no confidence in the self-correcting nature of markets and market economies?  The transcripts of the March meeting suggest the second answer.

Early in the discussion, President Plosser noted that he and President Bullard had recommended a change in the proposed policy statement.  The proposed statement, which was distributed to participants prior to the meeting, read: “the Committee anticipates that policy actions to stabilize financial markets and institutions, together with fiscal and monetary stimulus, will contribute to a gradual resumption of sustainable economic growth.”  Plosser and Bullard proposed that the statement be amended to read: “the Committee anticipates that market forces and policy actions will contribute to a gradual resumption of sustainable economic growth.”  Plosser noted that the proposed statement implied that
policy actions alone will stabilize the world.  And, frankly, I think creating an impression that the only game in town is policy actions and that market economies have no contribution to make in this stabilization is setting us up for failure and a credibility problem.  So we added the reference to market forces.
One might suppose that Plosser and Bullard’s recommendation generated considerable discussion and support, but one would be wrong.  Not a single person responded; not Bernanke, not anyone.  It was a Mr. Cellophane proposal — you’d never even know it was made.

Just before the policy vote was to be taken, Bernanke asked if there were any comments.  Despite being summarily ignored, Plosser responded, “I had suggested this notion of putting in market forces in terms of returning to stability.  I didn’t know whether you had forgotten that, but nobody ever commented on it.”  Bernanke asked if people were okay with substituting the sentence.  Governor Tarullo responded, “To what market forces are you referring?”  Governor Kohn then added, “I think what I heard around the table, Mr. Chairman, was not much confidence that market forces are moving in that direction and might even be moving in the other direction.”

“There’s not much confidence that government forces are going to fix it either,” Plosser replied.  President Lacker interjected, “Surely, if the economy recovers, it’s going to be a combination of policy actions and market forces.  Surely that’s the case.”  Bernanke responded, “Well, all we’re saying here is that these things [policy actions] will contribute.  We’re not saying that they’re the only reason.  Let me go on.”  But President Bullard interrupted,
I just want to press on that a bit.  It gives the impression that we’re hanging on a thread as to what the Congress does or what we do or something like that.  I don’t think you want to leave that impression.  Despite what the government does, you might recover faster or you might recover slower, and I think you should leave that thought in the minds of private citizens.
Bernanke replied, “Again, I think what we’re saying here is that we anticipate that these things [policy actions] will contribute to an overall dynamic.”  Bernanke went on with the vote.  The discussion was over.

The fact that only three of the 18 participants spoke out to suggest that the recovery would not be due solely to policy actions is disturbing.  Bernanke’s lack of support for the language is particularly worrisome because in his book, The Courage to Act, he notes that “as an economist, I instinctively trusted markets” (p. 99) and “I thought of myself as a Republican…with the standard economist’s preference for relying on market forces where possible” (p. 108).

Curiously, he did not take a strong stand for “market forces,” when he had the opportunity.  He could have said, “our actions and fiscal policy are only assisting the market.  We certainly don’t want to leave the impression that policy will do it all.”  He could have said this when Plosser first made the recommendation or at any time during the discussion toward the end of the meeting.  But he didn’t.  One is left to speculate why a person who instinctively trusts markets and market forces did not seize the opportunity to make a point about the role markets would play in mitigating the effects of the financial crisis and facilitating recovery.

The suggestion that market forces contribute to the improvement in the economy did not appear in the March 18, 2009, FOMC statement.  Interestingly, however, the phrase “market forces” did appear in the April policy statement.  But it received third billing: “the Committee continues to anticipate that policy actions to stabilize financial markets and institutions, fiscal and monetary stimulus, and market forces will contribute to a gradual resumption of sustainable economic growth in a context of price stability” (italics added).  The statement appeared in the draft language that was distributed to FOMC participants in advance of the meeting.  There was no discussion of the role of market forces at the April meeting or any meeting in 2009.  Was the statement included out of a deep-seated belief in the healing power of markets or merely to appease a small, but vocal, minority?

It is impossible to know for sure.  But there is little doubt that the Committee failed to recognize that healing takes time.  Monetary policy had already eased considerably by March 2009.  The Fed’s balance sheet more than doubled during the six months between September 18, 2008, and March 18, 2009 — increasing from $931.3 billion to $2 trillion.  Instead of waiting and giving these actions, and the market’s own healing power, time to work, the FOMC voted to expand the Fed’s balance sheet by an additional $1.15 trillion.

This action paved the way for the FOMC’s nearly 8-year zero interest rate policy, which has encouraged risk taking, redistributed income to the wealthy, contributed significantly to the rise in equity and house prices (which have surpassed their previous “bubble” levels), and created considerable uncertainty.  If the FOMC had maintained some confidence in markets’ ability to adapt, it would have waited a little longer to act and might have avoided an incredibly long-lived policy that will be extremely difficult to exit."

Monday, January 25, 2016

Keynesian economics and cognitive illusions

From Scott Sumner at EconLog.
"Consider the following two paradoxes: 
1. Falling wages are associated with falling RGDP. Falling wages cause higher RGDP.
2. Falling interest rates are associated with falling NGDP growth. Falling interest rates cause higher NGDP growth.

You often hear people say correlation doesn't prove causation, but rarely do correlation and causation go in opposite directions as often as in these two cases. I've talked a lot about interest rates in this blog; is there anything to be learned by comparing interest rates to wages? I believe the answer is yes.
The following is going to be an ad hoc model, which just happens to be true during most of recent US history. The "never reason from a price change" maxim warns us to not assume that it will always hold true.

Let's suppose that most employment fluctuations are caused by the combination of NGDP shocks and sticky nominal wages. For simplicity, assume that when NGDP falls by X%, nominal wages fall by only one half times X% in the short run, that is, only half as much as would be required to keep the labor market in equilibrium. We might observe three countries that see NGDP plunge by 2%, 10% and 20%. In those three countries nominal wages fall by only 1%, 5%, and 10%, that is by one half as much as NGDP fell. That's what we mean by sticky wages.

Now think about what that implies. The country where wages fell by the most (minus 10%) is the country where wages are the furthest above equilibrium. And that's likely to be the country with the highest unemployment rate. What conclusion would the average person draw from these stylized facts? They'd conclude that wage cuts "don't work". Cutting wages just causes NGDP to fall even further, and is thus self defeating. I'd argue that this is one of the most important themes in Keynes's General Theory.

And yet I believe this view is completely wrong. For the country where NGDP fell by 20%, a larger wage cut, say 15% or 18%, would have moved wages closer to equilibrium, and this would have led to lower unemployment. Keynes had causation exactly backwards, on an issue that is central to his critique of classical economics.

I hope that by now you see the connection to interest rates. Falling interest rates are a sign of a weak economy. As with wage cuts, a Fed decision to cut interest rates makes the economy stronger than otherwise. So then why are falling interest rates usually associated with a weak economy? Because a weak economy puts downward pressure on market interest rates, and the vast majority of Fed rates changes are merely reacting to changes in the economy, not causing them.

Here's a good recent example. Since the December rate increase, short-term interest rates in the fed funds futures market have been trending downwards. Instead of the 4 rate increases in 2016 predicted by the Fed last month, markets are now forecasting only one or two at most. So what are we to make of this change in the expected path of rates? It's theoretically possible that this reflects an expected easing of monetary policy. That is, the Fed is now less likely to raise rates, even assuming no change in the macroeconomic environment. An easier money policy, which would be expected to boost growth.

In fact, it's far more likely that these lower interest rate forecasts are a prediction of a weaker than expected economy, and also a prediction that the Fed will react to the weaker than expected economy by raising rates less than previously expected. Do we have any evidence for this claim? Yes, a mountain of evidence. All sorts of other asset markets are becoming much more bearish about the economy. If the Fed's likely decision to move away from an aggressive path of rate increases really were an expansionary policy, then asset prices would be rising as bond yields fell. But asset prices are falling with bond yields.

Interest rates usually fall when there is an NGDP growth slowdown. And yet it's also true that interest rates are usually too high when there is an NGDP downturn. This implies the Fed's target interest rate is sticky; it falls more slowly than would be required to maintain stable NGDP growth. Sound familiar? So periods when interest rates are falling are also periods where interest rates are becoming increasingly too high.

I believe that Keynes noticed that the deeper the depression, the lower the level of interest rates. Because he viewed low rates as being an expansionary monetary policy, he wrongly concluded that interest rate cuts were relatively ineffective in a deep depression. Unlike with wages, he did not wrongly reverse causation; he was too good a monetary economist to do that. (We'd have to wait for the Neo-Fisherians for that error.) But he did become excessively pessimistic about the potency of monetary stimulus in a depression, and for much the same reason that he wrongly thought that wage cuts would be ineffective in a depression.

His erroneous views on wage flexibility led him to reject classical solutions for depressions. In fairness, wage flexibility is not the best solution, even if Keynes was wrong about causation. The second error led Keynes to reject the views of progressives like Fisher, Hawtrey, Cassel, and even the Keynes of the Tract on Monetary Reform, who favored using monetary policy to stabilize the price level (or NGDP). Keynes wasn't hostile to their suggestion; he just didn't think that monetary policy alone could get the job done.

By the 1990s, New Keynesians had moved away from Keynes on these issues. They thought monetary policy was enough for stabilization of aggregate spending. And they thought wage cuts were expansionary. Due to the zero bound, economists have recently drifted back to old Keynesian ideas. My view is that they were wrong in the 1930s, wrong in the 1990s, and they are wrong today.

PS. When would wage changes be associated with output moving in the opposite direction? Perhaps if they were caused by exogenous policy shocks, such as the higher wages after the July 1933 implementation of the NIRA, which slowed the recovery, or the lower wages resulting from Germany's labor market reforms of 2004, which helped boost growth and reduce unemployment."
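The stylized sticky-wage arithmetic in the excerpt can be reproduced in a few lines, using Sumner's assumption that nominal wages fall only half as much as NGDP in the short run:

    # Sumner's stylized example: wages fall only half as far as NGDP, so the
    # biggest observed wage cuts coincide with the biggest remaining wage "overhang".
    ngdp_shocks = [-0.02, -0.10, -0.20]   # NGDP falls of 2%, 10%, and 20%
    stickiness = 0.5                      # wages adjust only half as much in the short run

    for shock in ngdp_shocks:
        wage_change = stickiness * shock            # observed wage cut
        required_change = shock                     # cut needed to restore equilibrium
        overhang = wage_change - required_change    # how far wages remain above equilibrium
        print(f"NGDP {shock:+.0%}: wages {wage_change:+.0%}, "
              f"still {overhang:+.0%} above market-clearing")

The country with the largest observed wage cut is also the one whose wages remain furthest above the market-clearing level, which is exactly the correlation-versus-causation trap the excerpt describes.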

‘Equal pay day’ this year is April 12; the next ‘equal occupational fatality day’ will be in the year 2027

From Mark Perry.
"Every year the National Committee on Pay Equity (NCPE) publicizes its “Equal Pay Day” to bring public attention to the gender pay gap. According to the NCPE, “Equal Pay Day” will fall this year on April 12, and allegedly represents how far into 2017 the average woman will have to continue working to earn the same income that the average man will earn this year. Inspired by Equal Pay Day, I introduced “Equal Occupational Fatality Day” in 2010 to bring public attention to the huge gender disparity in work-related deaths every year in the United States. “Equal Occupational Fatality Day” tells us how many years into the future women will be able to continue to work before they would experience the same number of occupational fatalities that occurred for men in the previous year.


[Chart: workplace fatalities by gender, 2014. Table: ten most dangerous occupations by fatality rate, 2014.]
The Bureau of Labor Statistics (BLS) released data last fall on workplace fatalities for 2014, and a new “Equal Occupational Fatality Day” can now be calculated. As in previous years, the chart above shows the significant gender disparity in workplace fatalities in 2014: 4,320 men died on the job (92.3% of the total) compared to only 359 women (7.7% of the total). The “gender occupational fatality gap” in 2014 was considerable — more than 12 men died on the job for every woman who died while working.

Based on the BLS data for 2014, the next “Equal Occupational Fatality Day” will occur about 11 years from now – on January 12, 2027. That date symbolizes how far into the future women will be able to continue working before they experience the same loss of life that men experienced in 2014 from work-related deaths. Because women tend to work in safer occupations than men on average, they have the advantage of being able to work for more than a decade longer than men before they experience the same number of male occupational fatalities in a single year.

Economic theory tells us that the “gender occupational fatality gap” explains part of the “gender pay gap” because a disproportionate number of men work in higher-risk but higher-paid occupations like coal mining (almost 100% male), fire fighters (94.3% male), police officers (87.6% male), correctional officers (71.4% male), logging (94.6% male), refuse collectors (91.4%), truck drivers (94.2%), roofers (99.5% male), highway maintenance (98.5%), commercial fishing (100%) and construction (97.4% male); see BLS data here. The table above shows that for the ten most dangerous occupations in 2014 based on fatality rates per 100,000 workers, men represented more than 91% of the workers in those occupations for all of the ten occupations except for farming, which is 76.2% male.

On the other hand, women far outnumber men in relatively low-risk industries, often with lower pay to partially compensate for the safer, more comfortable indoor office environments in occupations like office and administrative support (72.9% female), education, training, and library occupations (74.1% female), and healthcare (74.2% female). The higher concentrations of men in riskier occupations with greater occurrences of workplace injuries and fatalities suggest that more men than women are willing to expose themselves to work-related injury or death in exchange for higher wages. In contrast, women more than men prefer lower risk occupations with greater workplace safety, and are frequently willing to accept lower wages for the reduced probability of work-related injury or death.

Bottom Line: Groups like the NCPE use “Equal Pay Day” to promote a goal of perfect gender pay equity, probably not realizing that they are simultaneously advocating an increase in the number of women working in higher-paying, but higher-risk occupations like fire-fighting, roofing, construction, farming, and coal mining. The reality is that a reduction in the gender pay gap would come at a huge cost: several thousand more women would be killed each year working in dangerous occupations.

Here’s a question I pose for the NCPE every year: Closing the “gender pay gap” can really only be achieved by closing the “occupational fatality gap.” Would achieving the goal of perfect pay equity really be worth the loss of life for thousands of additional women each year who would die in work-related accidents?"
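The date behind "Equal Occupational Fatality Day" follows from simple division: male fatalities in the data year divided by the annual number of female fatalities gives the years of female work needed to match one year of male fatalities. A minimal sketch (the start date chosen to land near January 12, 2027 is an assumption here):

    from datetime import date, timedelta

    male_fatalities_2014 = 4320
    female_fatalities_2014 = 359

    # Years of female workplace fatalities needed to match one year of male fatalities.
    years_needed = male_fatalities_2014 / female_fatalities_2014   # about 12

    # Assume the clock starts at the beginning of 2015, the year after the data year.
    start = date(2015, 1, 1)
    eofd = start + timedelta(days=years_needed * 365.25)
    print(f"Years needed: {years_needed:.2f}")
    print(f"Approximate Equal Occupational Fatality Day: {eofd}")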

Sunday, January 24, 2016

Market Dominance Doesn't Last; Regulation Shouldn't Either

By Iain Murray of CEI.
"One of the justifications for heavy regulation of large companies is that they use market power to crush competition and maintain market dominance. Yet the history of America’s most successful companies—those that make it on to the Dow Jones Industrial Average (DJIA)—doesn’t support that theory. Sustainable competitive advantage is very hard to achieve, even for these titans of industry.
If we look at the history of the DJIA, we can immediately identify several significant changes in its sectoral composition over the years.

The DJIA was first published in 1884. It consisted of 11 companies, eight of which were railroad companies. The index was later expanded to 12 companies, before being expanded to 20 in 1916. The present Dow Jones Industrial Average began in 1928, when the list was lengthened once more from 20 to 30, consisting mostly of manufacturing companies and resource extraction companies such as Bethlehem Steel and Atlantic Petroleum (who?).

After the Second World War, the Dow entered a period of stability that probably still colors people’s perception of it. Indeed, there were no changes to its composition between 1959 and 1976. By 1991, however, the DJIA had diversified away from manufacturing and mining as companies like McDonald’s, Walt Disney, and J.P. Morgan entered the index.

The data reveal a declining trend in the dominance of the manufacturing industry. It accounted for almost half (46.7 percent) of Dow companies in 1965, going down to just one-fifth of DJIA companies today. In contrast, the tech and financial industries have seen tremendous growth over the same period. These groups accounted for just 6.6 percent of DJIA companies in 1965, growing to 36.7 percent today. Retail and consumer goods companies have also experienced a twofold increase from 16.7 to 33.3 percent over the same period.

Today, the DJIA is probably more diversified than ever before, with tech companies (Apple, Microsoft), payments networks (American Express, Visa), banks (Goldman Sachs, J.P. Morgan), and pharmaceutical and health companies alongside the traditional conglomerates and manufacturers. Needless to say, there are calls for more regulation of all of these companies.

Out of all the companies that have ever appeared on the Dow Jones Index since it was first introduced, 79 percent are no longer on the Index. The average time a company spends on the index is only 22.6 years. Over half of all the companies ever to appear on the index do not even exist anymore. This all shows creative destruction, innovation, and economic dynamism in action.

Moreover, companies are remaining on the Dow for shorter periods than before. Among all companies only 19 percent have been on the index for more than 50 years, while 45 percent have been for 10 years or less, and 65 percent for 20 years or less.

The Dow Jones Industrial Average was last updated in March 2015. The current companies’ average stay on the index is 29.2 years. Only four out of the current 30 have been on the index for more than 50 years, while eight companies have been on it for less than 10 years.
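Turnover statistics like these are straightforward to reproduce once you have each constituent’s entry and exit dates. Here is a minimal sketch with hypothetical membership records (the intervals below are made up for illustration, not the actual DJIA history):

```python
# Hypothetical (entered, exited) years for index constituents; None = still listed.
# Illustrative only -- not the actual DJIA membership history.
CURRENT_YEAR = 2016
members = [(1928, 1959), (1932, 2004), (1959, None), (1976, 1991),
           (1987, None), (1991, 2013), (1999, None), (2013, None)]

tenures = [(exited or CURRENT_YEAR) - entered for entered, exited in members]
still_listed = sum(1 for _, exited in members if exited is None)

print(sum(tenures) / len(tenures))        # average years spent on the index
print(1 - still_listed / len(members))    # share of past constituents no longer listed
```

The same two numbers, computed over the full historical membership list, are what produce the 22.6-year average tenure and the 79 percent departure rate cited above.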

When we consider how drastically the composition of the DJIA has changed over time, from predominantly railroad companies to manufacturing and resource-extraction companies and, more recently, to tech ventures and financial firms, we can begin to see how outdated regulations drafted during the Dow’s “heyday” of stability are inappropriate and often detrimental when applied to today’s DJIA companies.

Many of the rules and regulations drafted in the years when the manufacturing industry dominated the DJIA (1920s-1990s) fail to keep up with the constantly changing and dynamic nature of the market. Rather than emerging from the bottom up in an experimental environment of learning and feedback and adapting to changes in the market, regulations approaching a century old, drafted for a manufacturing-based economy, remain static; they are inappropriate in today’s economy and can stifle innovation.

Like common law, and the DJIA itself for that matter, regulation should be a product of spontaneous evolution; such a hands-off, experimental regulatory framework would better facilitate a rapidly changing market, ensuring the highest possible degree of market coordination."

Obama's Econ Advisers: Occupational Licensing Is a Disaster

The spread of licensing has been a cruel mistake

By Mikayla Novak of FEE. Excerpts:
"The report cited numerous problems arising from this increasingly burdensome regulatory practice, which requires ordinary Americans to obtain expensive licenses and permits to perform ordinary jobs.
It is a belated recognition by the administration that government has long been acting against the best interests of workers and consumers."

"Licensing hurts workers

Occupational licensing locks countless people out of dignified and meaningful job opportunities. The CEA report indicates that more than a quarter of all workers in the United States need a government license or permit to legally work. Two-thirds of the increase in licensing since the 1960s is attributable to an increase in the number of professions being licensed, not to growth within traditionally licensed professions like law or medicine.

The data show that licensed workers earn on average 28 percent more than unlicensed workers. Only some of this observed premium is accounted for by the differences in education, training and experience between the two groups. The rest comes from reducing supply, locking competitors out of the market and extracting higher prices from consumers.
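The claim that only part of the 28 percent premium is explained by education, training, and experience is the kind of thing a wage regression with a licensing indicator would test: the raw licensed/unlicensed gap shrinks toward the pure licensing premium once controls are added. Here is a minimal sketch on synthetic data (the CEA’s actual data and specification are not reproduced here):

```python
# Sketch of a log-wage regression with a licensing dummy -- synthetic data,
# not the CEA's dataset or specification.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5_000
educ = rng.integers(10, 20, n).astype(float)               # years of schooling
exper = rng.integers(0, 40, n).astype(float)               # years of experience
licensed = (educ + rng.normal(0, 2, n) > 15).astype(int)   # better-educated workers license more often
log_wage = 1.5 + 0.08 * educ + 0.01 * exper + 0.10 * licensed + rng.normal(0, 0.3, n)
df = pd.DataFrame({"educ": educ, "exper": exper, "licensed": licensed, "log_wage": log_wage})

raw_gap = df.groupby("licensed").log_wage.mean().diff().iloc[-1]
controlled = smf.ols("log_wage ~ educ + exper + licensed", data=df).fit().params["licensed"]
print(raw_gap)      # raw premium: the licensing effect plus the education difference
print(controlled)   # close to the built-in 0.10 premium once education and experience are controlled for
```

In the synthetic data the gap that survives the controls is the licensing premium itself; in the real data, the CEA attributes the surviving portion to restricted supply rather than to worker quality.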

What makes professional licensing so invidious is that it serves as a barrier to entry in the labor market, simply because it takes so much time and money to obtain a license to work.

For young people, immigrants, and low-income individuals, it can be extremely difficult to stump up the cash and find the time — sometimes hundreds or even thousands of hours — to get licensed. The fees to maintain a license can also be exorbitant.

Compounding the problem is that licensing requirements are spreading into more industries, such as construction, food catering, and hairdressing — occupations where it used to be easy to start a career.
Today, there is arguably no more lethal poison for labor market freedom and upward mobility than occupational licensing.

Licensing hurts consumers

Defenders of occupational licensing say that workers need to be licensed because without it consumers would be harmed by poor service.

In the absence of licensing, children will be taught improperly at school, patients won’t get adequate health care in hospital, home owners will not get their leaky sinks fixed, and somebody could fall victim to an improper haircut.

But, in the name of promoting quality, licensing regulations perversely raise costs and reduce choices for consumers.

The CEA concludes that, by imposing entry barriers against potential competitors who could undercut the prices of incumbent suppliers, licensing raises prices for consumers by between 3 and 16 percent. Moreover, the effect of licensing on product quality is unclear. The report notes that the empirical literature doesn’t demonstrate an increase in quality from licensure.

By restricting supply, licensing dulls the incentive for incumbents to provide the best quality products because the threat of new entrants competing with better offerings is diminished.

Perversely, the inflated prices offered by licensed providers may force some consumers to seek unlicensed providers, or to use less effective substitutes, or to do jobs themselves — in some cases increasing the risk of accidents.

In a blow to the notion of efficient government bureaucracy, the CEA indicates that government licensing boards routinely fail in monitoring licensed providers, contributing to the lack of improvement in quality."
"the French are probably paying 20 percent more than they should for the services they get from regulated professions,"

Saturday, January 23, 2016

Putting Oil Prices in Perspective

From Greg Mankiw.
"Given the huge decline in oil prices over the past year, I thought it might be useful to put the current price in some historical perspective. Below is the price of oil relative to wages. Roughly, the graph shows the number of hours a production worker needs to work to make enough to buy one barrel of crude oil. What is most striking to me how little long-term trend there is (despite great volatility) and that the current level is very much normal by historical standards.


"

Regulation and Income Inequality: The Regressive Effects of Entry Regulations

By Patrick McLaughlin and Laura Stanley of Mercatus.
"Launching a business or entering a professional field can be challenging in its own right, but in some countries—including the United States—regulations can make it even more difficult to get started. For example, to become a professional hair-braider or a florist in certain states, a worker must complete hundreds of hours of training and pass multiple exams.

Entry regulations require would-be members of a specific profession to pass exams or meet education or experience requirements in order to obtain a license to work. Proponents claim that such regulations might improve the quality of service, but most studies have shown that there is no relationship between licensing and quality. Entry regulations may, however, increase income inequality by corralling poorer workers into lower-paying, unregulated fields or forcing them to operate illegally and incur the higher costs of doing so. If entry regulations require expensive education, testing, and fees, workers may choose instead to accept jobs that pay less and don’t take full advantage of their skills.

A new study for the Mercatus Center at George Mason University examines the relationship between income inequality and the number of regulatory steps necessary to start a business. Looking at 175 countries and multiple variables, the study finds that there is a positive relationship between entry regulations and income inequality.

To read the study in its entirety and learn more about its authors, Mercatus senior research fellow Patrick A. McLaughlin and Mercatus MA Fellowship alumna Laura Stanley, see “Regulation and Income Inequality: The Regressive Effects of Entry Regulations.”

STUDY DESIGN AND DATA

While previous income inequality research has focused on GDP growth, relative returns on capital and labor, economic freedom, and ethnic heterogeneity, there has been little investigation of the relationship between entry regulations and inequality. This study is the first to perform a cross-country test of this relationship.
  • Data on entry regulations come from the World Bank’s Doing Business dataset.
  • Income inequality is measured by the post-tax, post-transfer Gini coefficient (a standard measure of a country’s income distribution), which comes from Frederick Solt’s Standardized World Income Inequality Database of Gini coefficients; a minimal worked example of the Gini calculation follows this list.
  • Additionally, income inequality can be measured using the World Top Incomes Database, which provides data on the shares of income going to the top 1, 5, or 10 percent of workers across countries over an expansive time period.
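For readers unfamiliar with the measure, here is a minimal sketch of how a Gini coefficient is computed from a list of incomes (illustrative only; the study takes its Gini values from Solt’s database rather than computing them this way):

```python
# Gini coefficient of a list of incomes, via the sorted-data identity
# G = 2 * sum_i(i * x_i) / (n * sum(x)) - (n + 1) / n   (incomes sorted ascending, i = 1..n).
def gini(incomes):
    xs = sorted(incomes)
    n = len(xs)
    weighted_cumsum = sum(rank * x for rank, x in enumerate(xs, start=1))
    return 2 * weighted_cumsum / (n * sum(xs)) - (n + 1) / n

print(gini([10, 10, 10, 10]))   # 0.0  -> perfect equality
print(gini([0, 0, 0, 100]))     # 0.75 -> one person holds everything
```

Read as a relative change, the 1.5 percent effect reported below would move a Gini of, say, 0.400 to roughly 0.406.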
KEY FINDING: ENTRY REGULATIONS CAN INCREASE INCOME INEQUALITY

Requiring a greater number of steps to open a business is associated with higher levels of income inequality.
  • An increase of one standard deviation in the number of steps necessary to legally open a business is associated with a 1.5 percent increase in the Gini coefficient (i.e., an increase in income inequality) and a 5.6 percent increase in the share of income going to the top 10 percent of earners.
  • While this finding does not imply causality, there is no theory that suggests plausible reverse causality: that greater income inequality causes entry regulations. Instead, the evidence suggests that a greater number of entry regulations leads to greater income inequality.
POLICY RECOMMENDATIONS

Income inequality is a primary focus of many politicians and policymakers. One possible cause of income inequality is entry regulations. Countries with more burdensome entry regulations—that is, countries where red tape makes it harder to set up a business—tend to also have greater income inequality. Policymakers should focus on three main policy goals to mitigate these effects:
  • Avoid establishing ineffective entry regulations. Regulations should not be promulgated if they do not solve a demonstrable social problem.
  • Consider alternative policies to address relevant social problems. Alternative policies could include mandatory disclosures, registration, certifications, and titling, among others.
  • Examine current licensing restrictions for unintended regressive effects. Retrospective reviews of current licensing restrictions can help determine whether regulations have resulted in higher-quality service or have instead been ineffective."

Friday, January 22, 2016

Oxfam’s misleading wealth statistics

By Felix Salmon of Fusion.
"Oxfam is out with a new report about the world’s wealth. This is big news: it led the front page of the Guardian this morning, and there has been lots of similarly uncritical reception of the findings.
The problem is that this version of Oxfam’s report is just as crap as the last version, which came out a year ago. Even worse, in fact, since it adds a bunch of silly extrapolations.

So, this is a recapitulation and update of a post I wrote at Reuters in the wake of the last report. (And, my first attempt at evergreen journalism!)

The meme is older than the 2014 report. It started, back in 2011, with the Waltons: six members of the family, we were repeatedly told, were worth as much as the bottom 30% of all Americans combined. In the Oxfam version, the world’s top 80, or top 67, or top 85 richest people have the same wealth as the bottom half of the global population. The latest report has a new twist: it adds up the total wealth of the top 1%, and tries to work out how that compares to the wealth of the bottom 99%.
How does Oxfam arrive at its conclusions? When it’s just adding up a few dozen people at the very top, it’s easy: they just start at the top of the Forbes billionaires list, and start counting. As for the rest of the data, it comes from Credit Suisse, which puts out an annual Global Wealth Databook. Oxfam then uses the Credit Suisse data to derive all the rest of its numbers: it does no real empirical work of its own.

This chart, for instance, comes entirely from Credit Suisse data:

[Chart built from Credit Suisse data]

Oxfam reads a lot into this chart. Indeed, it’s happy to take the data points from 2010 onwards and start extrapolating wildly:

[Chart: Oxfam’s extrapolation of the Credit Suisse data points]

But this is a little crazy. The lines on the Oxfam charts are thin, which gives the impression that the numbers are precise. But of course the numbers are anything but precise: the error bars on all these data points are huge, which means that the variation over the years could easily just be statistical noise.

Beyond that, the whole concept of adding up wealth is fundamentally flawed. For instance, do you notice anything odd about this chart?

[Chart: regional composition of global wealth deciles, from Credit Suisse data]
The weird thing is that triangle in the top left-hand corner. If you look at the tables in the Credit Suisse databook, China has zero people in the bottom 10% of the global wealth distribution: everybody in China is in the top 90% of global wealth, and the vast majority of Chinese are in the top half of global wealth. India is on the list, though: if you’re looking for the poorest 10% of the world’s population, you’ll find 16.4% of them in India, and another 4.4% in Bangladesh. Pakistan has 2.6% of the world’s bottom 10%, while Nigeria has 3.9%.

But there’s one unlikely country which has a whopping 7.5% of the poorest of the poor — second only to India. That country? The United States.

How is it that the US can have 7.5% of the bottom decile, when it has only 0.21% of the second decile and 0.16% of the third? The answer: we’re talking about net worth, here: assets minus debts. And if you add up the net worth of the world’s bottom decile, it comes to minus a trillion dollars. The poorest people in the world, using the Credit Suisse methodology, aren’t in India or Pakistan or Bangladesh: they’re people like Jérôme Kerviel, who has a negative net worth of something in the region of $6 billion.

America, of course, is the spiritual home of the overindebted — people underwater on their mortgages, recent graduates with massive student loans, renters carrying five-figure car loans and credit-card obligations, uninsured people who just got out of hospital, you name it. If you’re looking for people with significant negative net worth, in a way it’s surprising that only 7.5% of the world’s bottom 10% are in the US.

And as you start adding all those people up — the people who dominate the bottom 10% of the wealth rankings — their negative wealth only grows in magnitude: you get further and further away from zero.

The result is that if you take the bottom 30% of the world’s population — the poorest 2 billion people in the world — their total aggregate net worth is not low, it’s not zero, it’s negative. To the tune of roughly half a trillion dollars. My niece, who just got her first 50 cents in pocket money, has more money than the poorest 2 billion people in the world combined.

Or at least she does if you really consider Jérôme Kerviel to be the poorest person in the world, and much poorer than anybody trying to get by on less than a dollar a day. All of whom would happily change places with, say, Eike Batista, even if the latter, thanks to his debts, has a negative net worth in the hundreds of millions of dollars.
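The mechanics behind that counterintuitive ranking are easy to see with made-up numbers. In the sketch below (illustrative only, not Credit Suisse’s data), ranking people by net worth puts the heavily indebted at the very bottom, the bottom slice of the distribution sums to a negative number, and anyone with a few cents of positive net worth “has more” than that entire group combined:

```python
# Illustrative net worths (assets minus debts) -- made-up numbers, not real data.
population = [
    -6_000_000_000,   # a Kerviel-style negative-net-worth outlier
    -150_000,         # a household underwater on its mortgage
    -40_000,          # a graduate with student loans and no assets
    0, 0, 0,          # no assets, no debts
    200, 500, 1_500,  # small savers
    5_000_000,        # a wealthy household
]

ranked = sorted(population)                       # "poorest" first, by net worth
bottom_30_pct = ranked[: int(len(ranked) * 0.3)]

print(sum(bottom_30_pct))          # a large negative number
print(0.50 > sum(bottom_30_pct))   # True: fifty cents "outranks" the whole group combined
```

Adding more people from the bottom of the ranking only makes the aggregate more negative, which is exactly the property that makes the “top X equal bottom Y” comparisons so slippery.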

There’s no doubt that the trillions of dollars owned by the world’s top 1% constitute an enormous amount of money: there is an astonishing amount of wealth inequality in the world, and it’s shocking that just 80 people are all it takes to get to $1.9 trillion. You could spread that money around the “bottom billion” and give them $1,900 each: enough to put them squarely in the fourth global wealth decile.

Oxfam claims that the $1.9 trillion owned by the world’s top 80 people is equal to the amount of wealth held by the bottom 50% of the world’s population. But look at just the top two-fifths of the 3.5 billion people referred to in the Oxfam stat. That’s 1.4 billion people; between them, they are worth some $2.2 trillion. And they’re a subset of the 3.5 billion people who between them are worth $1.9 trillion. As you add more people at the bottom of the wealth distribution, the Oxfam aggregate doesn’t go up, it goes down.

The first lesson of this story, then, is that it’s very easy, and rather misleading, to construct any statistic along the lines of “the top X people have the same amount of wealth as the bottom Y people”.

The second lesson of this story is broader: that when you’re talking about poor people, aggregating wealth is a silly and ultimately pointless exercise. Some poor people have modest savings; some poor people are deeply in debt; some poor people have nothing at all. (Also, some rich people are deeply in debt, which helps to throw off the statistics.) By lumping them all together and aggregating all those positive and negative ledger balances, you arrive at a number which is inevitably going to be low, but which is also largely meaningless.

The Chinese tend to have large personal savings as a percentage of household income, but that doesn’t make them richer than Americans who have negative household savings — not in the way that we commonly understand the terms “rich” and “poor”. Wealth, and net worth, are useful metrics when you’re talking about the rich. But they tend to conceal more than they reveal when you’re talking about the poor."
See also Be careful with that viral statistic about the top 1% owning half the world’s wealth by Ezra Klein of Vox.

The Austrian theory of the business cycle continues its comeback

From Tyler Cowen at Marginal Revolution.
"Except no one seems interested in calling it by that name.  Here is the new NBER working paper by David López-Salido, Jeremy C. Stein, and Egon Zakrajšek, “Credit-Market Sentiment and the Business Cycle”:
Using U.S. data from 1929 to 2013, we show that elevated credit-market sentiment in year t – 2 is associated with a decline in economic activity in years t and t + 1. Underlying this result is the existence of predictable mean reversion in credit-market conditions. That is, when our sentiment proxies indicate that credit risk is aggressively priced, this tends to be followed by a subsequent widening of credit spreads, and the timing of this widening is, in turn, closely tied to the onset of a contraction in economic activity. Exploring the mechanism, we find that buoyant credit-market sentiment in year t – 2 also forecasts a change in the composition of external finance: net debt issuance falls in year t, while net equity issuance increases, patterns consistent with the reversal in credit-market conditions leading to an inward shift in credit supply. Unlike much of the current literature on the role of financial frictions in macroeconomics, this paper suggests that time-variation in expected returns to credit market investors can be an important driver of economic fluctuations."
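The paper’s core exercise is a predictive regression: a proxy for credit-market sentiment measured in year t - 2 is used to forecast economic activity in year t. Here is a minimal sketch of that style of regression on synthetic data (not the authors’ series, sentiment proxies, or controls):

```python
# Sketch of regressing activity growth in year t on credit-market sentiment in year t-2.
# Synthetic data for illustration -- not the paper's variables or specification.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
years = pd.Index(range(1929, 2014), name="year")
sentiment = pd.Series(rng.normal(0, 1, len(years)), index=years)
# Construct activity so that exuberant sentiment two years earlier predicts weaker growth.
activity = -0.5 * sentiment.shift(2) + rng.normal(0, 1, len(years))

df = pd.DataFrame({"activity": activity, "sentiment_lag2": sentiment.shift(2)}).dropna()
fit = smf.ols("activity ~ sentiment_lag2", data=df).fit()
print(fit.params["sentiment_lag2"])   # negative coefficient, close to the built-in -0.5
```

A negative, statistically reliable coefficient on the two-year lag is the pattern the authors report in the actual 1929-2013 data.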

Thursday, January 21, 2016

What happened in 2015 is what is supposed to happen when an El Niño is superimposed upon a warm period or at the end year of a modest warming trend

See The Current Climate of Extremes by Patrick J. Michaels and Paul C. "Chip" Knappenberger of Cato.
"What a day yesterday! First, our National Oceanic and Atmospheric Administration (NOAA) announced that 2015 was the warmest year in the thermometric, and then the Washington Post’s Jason Samenow published an op-ed titled “Global warming in 2015 made weather more extreme and it’s likely to get worse.”
Let’s put NOAA’s claim in perspective.  According to Samenow, 2015 didn’t just break the previous 2014 record, it “smashed” it (by 0.16°C).  But 2015 is the height of a very large El Niño, a quasi-periodic warming of tropical Pacific waters that is known to kite global average surface temperature for a year or so. The last big one was in 1998.  It, too, set the then-record for warmest surface temperature, and it was 0.12°C above the previous year, which, like 2014, was the standing record at the time.

So what happened in 2015 is what is supposed to happen when an El Niño is superimposed upon a warm period or at the end year of a modest warming trend.  If it wasn’t a record-smasher, there would have to be some extraneous reason why, such as a big volcano (which is why 1983 wasn’t more of a record-setter).

El Niño warms up surface temperatures, but the excess heat takes 3 to 6 months or so to diffuse into the middle troposphere, around 16,000 feet up.  Consequently it won’t fully appear in the satellite or weather balloon data, which record temperatures in that layer, until this year.  So a peek at the satellite data (and weather balloon data from the same layer) will show 1) just how much of 2015’s warmth is because of El Niño, and 2) just how bad the match is between what we’re observing and the temperatures predicted by the current (failing) family of global climate models.

On December 8, University of Alabama’s John Christy showed just that comparison to the Senate Subcommittee on Space, Science, and Competitiveness.  It included data through November, so it was a pretty valid record for 2015 (Figure 1).



Figure 1. Comparison of the temperatures in the middle troposphere as projected by the average of a collection of climate models (red) and several different observed datasets (blue and green). Note that these are not surface temperatures, but five-year moving averages of temperatures in the lower atmosphere.

El Niño’s warmth occurs because it suppresses the massive upwelling of cold water that usually occurs along South America’s equatorial coast.  When it goes away, there’s a surfeit of cold water that comes to the surface, and global average temperatures drop.  1999’s surface temperature readings were 0.19°C below 1998’s.  In other words, the cooling, called La Niña, was larger than the El Niño warming the year before.  This is often the case.

So 2016’s surface temperatures are likely to be down quite a bit from 2015 if La Niña conditions occur for much of this year.  Current forecasts are that this may begin this summer, which would spread the La Niña cooling between 2016 and 2017.

The bottom line is this:  No El Niño, and the big spike of 2015 doesn’t happen.

Now on to Samenow. He’s a terrific weather forecaster, and he runs the Post’s very popular Capital Weather Gang web site.  He used to work for the EPA, where he was an author of the “Technical Support Document” for their infamous finding of “endangerment” from carbon dioxide, which is the only legal excuse President Obama has for his onslaught of expensive and climatically inconsequential restrictions of fossil fuel-based energy.  I’m sure he’s aware of a simple real-world test of the “weather more extreme” meme.  University of Colorado’s Roger Pielke, Jr. tweeted it out on January 20 (Figure 2), with the text “Unreported. Unspeakable. Uncomfortable. Unacceptable.  But there it is.”



Figure 2. Global weather-related disaster losses as a proportion of global GDP, 1990-2015.

It’s been a busy day on the incomplete-reporting-of-climate front, even as some computer models are painting an all-time record snowfall for Washington DC tomorrow.  Jason Samenow and the Capital Weather Gang aren’t forecasting nearly that amount because they believe the model predictions are too extreme.  The same logic ought to apply to the obviously “too-extreme” climate models as well, shouldn’t it?"