Friday, June 30, 2017

Why the fear-mongering on Medicaid is totally overblown

By Charles Blahous of Mercatus. Excerpt:
"Before the Affordable Care Act, Medicaid required states to provide coverage for certain groups, including seniors, people with disabilities, pregnant women and families with young children living on incomes less than or near the federal poverty level. The ACA expanded potential coverage to include childless adults with incomes up to 138 percent of the poverty line, but the previously eligible low-income patients who depend on Medicaid for lifesaving treatments would remain covered even if the ACA were fully repealed.

In fact, these vulnerable individuals might even benefit from a repeal in some respects. One controversial provision in the ACA provided a far higher level of federal support for childless adults — who before the expansion had rarely been eligible for Medicaid, regardless of income — than what has been available for the program’s historically eligible population. This imbalance distorted state decision-making, favoring coverage for the expansion population over timely access for the neediest individuals to Medicaid’s limited supply of health services.

The elevated federal payments for Medicaid expansion have also contributed to other problems. For example, some researchers now warn that the expansion has resulted in a shortage of primary-care physicians in Medicaid, although academic studies have produced mixed results.

And in terms of the budget, federal Medicaid costs would rise under current law from $389 billion today to $650 billion annually by 2027 — a growth rate that outstrips our ability to finance it. In both 2015 and 2016, per-capita costs for the Medicaid expansion population came in more than 60 percent higher than previous estimates (largely because states passed on virtually all expansion costs to the federal government). Earlier this month, the chief actuary at the Centers for Medicare and Medicaid Services (CMS) raised projections of expansion’s per-capita costs even further.

The Congressional Budget Office has projected that the pending legislation before Congress would result in large cost savings, primarily by comparing the bills with how Medicaid enrollment would evolve if the ACA remained on the books. That comparison is important, but it obscures how many people would remain on Medicaid’s rolls. In fact, the CMS actuary projects that under the House bill, total Medicaid enrollment will stay roughly constant above 70 million people over the next decade. This is lower than it would be under the ACA, but higher than the enrollment population before the ACA was enacted (roughly 55 million)."
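
A quick back-of-the-envelope check of the budget figures in the excerpt, as a minimal Python sketch. The 4 percent nominal GDP growth figure is my illustrative assumption, not Blahous's:

```python
# Back-of-the-envelope check of the Medicaid spending figures cited above:
# $389 billion in 2017 growing to $650 billion in 2027.
spending_2017 = 389e9
spending_2027 = 650e9
years = 10

cagr = (spending_2027 / spending_2017) ** (1 / years) - 1
print(f"Implied annual growth rate: {cagr:.1%}")  # about 5.3% per year

# Illustrative comparison (my assumption, not from the excerpt): if nominal
# GDP grows about 4% per year, Medicaid's share of the economy keeps rising,
# which is the "growth rate that outstrips our ability to finance it" point.
gdp_growth = 0.04
excess = (1 + cagr) / (1 + gdp_growth) - 1
print(f"Growth in excess of assumed GDP growth: {excess:.1%} per year")
```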

Cash for Corollas: When Stimulus Reduces Spending

From the American Economic Journal: Applied Economics. By Mark Hoekstra, Steven L. Puller and Jeremy West.


Citation

Hoekstra, Mark, Steven L. Puller, and Jeremy West. 2017. "Cash for Corollas: When Stimulus Reduces Spending." American Economic Journal: Applied Economics, 9(3): 1-35.

Thursday, June 29, 2017

The Minimum Wage: Evidence from a Danish Discontinuity

By Alex Tabarrok.
"In addition to the Seattle study, another minimum wage paper crossed my path this week and it takes a very different approach than much of the literature. In Denmark the minimum wage jumps up by 40% when a worker turns 18. Thus the authors, Kreiner, Reck and Skov, ask what happens to the employment of young people when they hit their 18th birthday? Answer: employment drops dramatically, by one-third.

A picture tells the story. On the left is measured wages by age, the jump up due to the minimum wage law is evident at age 18. On the right is the employment rate–the jump down at age 18 is also evident as is a bit of pre-loss as workers approach their 18th birthday.



The authors have administrative data covering wages, employment and hours worked for the entire workforce of Denmark so their estimates are precise.

Denmark has laws making age discrimination illegal but these do not apply when a young person turns 18 and firms may legally search for under or over-18 age workers.

A variety of restrictions mean that under-18 age workers can do less than adults (e.g. they can’t legally lift more than 25 kilos or have a driver’s license.) Thus, productivity increases at age 18, making the employment loss at this age even more dramatic.

The authors can’t tell for certain if workers are quitting or getting fired but there are few other obvious discontinuities around exactly age 18. Students are eligible for certain benefits at age 18 but the authors are able to look at sub-samples where this objection doesn’t apply and the results are robust.

In a section of the paper that adds important new evidence to the debate, the authors look at the consequence of losing a job at age 18. One year after separation only 40% of the separated workers are employed but 75% of the non-separated workers are employed. Different interpretations of this are possible. The separated workers will tend to be of lower quality than the non-separated and maybe this is correlated with less desire to have a job. Without discounting that story entirely, however, the straightforward explanation seems to me to be the most likely. Namely, the minimum wage knocks low-skill workers off the job ladder and it’s difficult to get back on until their skills improve."
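
Hedged arithmetic on the quoted magnitudes: a one-third employment drop in response to a 40 percent wage jump implies an employment elasticity of roughly -0.8 at the discontinuity. A minimal sketch using only the figures quoted above:

```python
# Implied employment elasticity at the Danish age-18 discontinuity,
# using only the figures quoted above.
wage_jump = 0.40          # minimum wage rises about 40% at age 18
employment_drop = -1 / 3  # employment falls by about one-third

elasticity = employment_drop / wage_jump
print(f"Implied employment elasticity: {elasticity:.2f}")  # about -0.83
```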

More than 60,000 Canadians left the country for medical treatment in 2016

From The Fraser Institute.
"An estimated 63,459 Canadians travelled abroad for medical care in 2016—up nearly 40 per cent over the previous year, finds a new study released today by the Fraser Institute, an independent, non-partisan Canadian public policy think-tank.

“More and more Canadians clearly feel they must leave the country to get the medical care they need,” said Yanick Labrie, Fraser Institute senior fellow and co-author of Leaving Canada for Medical Care, 2017.

So why are Canadians leaving the country for treatment? Reasons may include Canada’s long wait times. In 2016, according to the Fraser Institute’s annual measurement of health-care wait times, patients waited 10.6 weeks for medically necessary treatment after seeing a specialist—almost four weeks longer than what physicians consider clinically “reasonable.”

According to study estimates, more patients (9,454) travelled abroad for general surgeries than any other treatment.

High numbers of Canadians also left the country for urology treatment (6,426), internal medicine procedures such as colonoscopies, gastroscopies and angiographies (5,095) and ophthalmology treatment (3,990).

Among physicians in Canada, otolaryngologists (which include ear, nose and throat specialists) reported the highest proportion (2.1 per cent) of patients travelling abroad for treatment, followed by neurosurgeons (1.9 per cent). Across Canada, physicians in British Columbia reported the highest proportion of patients (2.4 per cent) leaving, while Ontario saw the largest number of patients (26,513) who left the country for treatment.

In fact, seven of 10 provinces saw an increase in the number of patients leaving the country for treatment, with only Newfoundland and Labrador, P.E.I. and New Brunswick experiencing a decline.

“Considering Canada’s long health-care wait times, which can result in increased suffering for patients and decreased quality of life, it’s not surprising that so many Canadians are travelling abroad for medical treatment,” Labrie said."
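
The 40 percent growth claim is easy to sanity-check against the 2016 headcount; the sketch below backs out the implied 2015 figure (an approximation, since "nearly 40 per cent" is itself rounded):

```python
# Backing out the implied 2015 patient count from the figures quoted above.
patients_2016 = 63_459
yoy_growth = 0.40  # "nearly 40 per cent", a rounded figure

implied_2015 = patients_2016 / (1 + yoy_growth)
print(f"Implied 2015 count: {implied_2015:,.0f}")  # roughly 45,300
```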

Wednesday, June 28, 2017

The difference between the lived experiences of Americans at different income levels has actually been decreasing

See Your Neighbor's Fancy Car Should Make You Feel Better about Income Inequality by John Nye of Mercatus. Excerpt:
"Today while I was out running errands in my 5-year-old Honda Accord, I passed a Tesla. If I were a different kind of guy, seeing Elon Musk's latest creation whisk past me as I trundled along in my middleclassmobile might have inspired a sense of personal envy, or even some worry about the social implications of inequality in America.

But I'm an economist. And let's face it: In practical terms, the difference between a $200,000 Tesla and my last car, a beat-up minivan worth $2,000 at trade-in, is not all that large. They're both safe forms of transportation that get you from point A to point B and, given legal limits and the reality of suburban traffic, most of the time they're driven at roughly the same speeds.

In that sense, measures of income inequality overstate the differences within a developed country like the United States. The products available to the masses are, in many cases, nearly as good as those available only to the elite. Your garbageman's old Timex and your podiatrist's brand new Rolex serve almost precisely the same function.

It wasn't always so. A century ago, a hungry rich person had access to significantly more food and more choices than a poor one. Yet even bluebloods would have been able to get their hands on less variety and quality than one now finds at an average Midwestern all-you-can-eat buffet. When Herbert Hoover promised "a chicken in every pot" in the election of 1928, it was the sort of pledge that no one expected a politician to actually keep. Today, each American consumes an average of 27 chickens a year, and obesity is a bigger problem than hunger.

The chasm between the very rich and the median citizen yawns wider the further back you look. Three centuries ago, an aristocrat riding in a cushioned carriage would have looked down at a peasant trudging barefoot through the muck—a much more substantial difference than the Honda-Tesla gap today.

So why the 21st century panic about the gap between the rich and poor? At first glance, the numbers do look damning. Median family income has grown by about 20 percent since the 1970s, while income for those in the top 5 percent of households has grown by 75 percent or more, according to the Center on Budget and Policy Priorities. Economists Thomas Piketty and Emmanuel Saez looked at IRS data and concluded that the share of total pre-tax, pre-transfer income going to the top 1 percent has risen to levels not seen since the 1920s. That suggests an increase, not a decrease, in inequality.

But appearances can be deceiving. As the Brookings economist Gary Burtless has pointed out, if you account for transfers such as government housing assistance and employer-provided health insurance, "Americans in the bottom one-fifth of the distribution saw their real net incomes climb by almost 50 percent" since the late 1970s, while "those in the middle fifth of the distribution saw their incomes grow 36 percent." It's worth remembering that anytime someone says the gap between rich and poor is increasing, what he usually means is that rich people are getting richer faster than poor people are getting richer—not that any group is becoming worse off overall.

Meanwhile, the difference between the lived experiences of Americans at different income levels has actually been decreasing. Changes in the quality of goods consumed by almost everyone mean we're a whole lot more equal than the data superficially suggest.

What's more, the same behavior that sparks personal envy and political angst—splashing out on fancy apartments, rare jewels, and other truly scarce goods—may actually be a sign of the closing gap between rich and poor in practical terms. When everyone is wealthier, it becomes harder to demonstrate differences in wealth.

Better Off

Economic growth and the technological developments it fuels have been spectacularly effective at making incredible products cheap enough to be attainable for most families. As a result, Americans can routinely enjoy luxuries of the sort they once might have assumed they'd have to win the lottery to afford. The big-screen TV that a super-wealthy denizen of Beverly Hills might have bragged about 30 years ago can't be given away today on Craigslist; a low-end Android smartphone boasts many times more computing power than the best supercomputers available only to scientists in 1985. And while many Americans may never make it to Africa, considering the wealth of programming available from places like the National Geographic channel and Netflix, they hardly need to.

If you suddenly became a multi-millionaire, what would you do with the money? Hire a chauffeur? Eat better food? Wear custom-designed clothes? Many of those very outcomes could soon be available to us all, assuming robust enough economic and technological advancement.

Imagine a world where self-driving luxury cars cost so little that the average family thinks it's normal to buy a new one every year, where fast-casual joints sell the equivalent of cuisine now served only at five-star restaurants, and where bespoke suits are computer-fitted and delivered by drone for the cost of a cheap three-pack of T-shirts today. What's crazy about these possibilities is that they're just that: possible.

Economic growth and technological development can do much to change your material standard of living, and they have done much to reduce the disparities in people's material well-being in the developed world. The result is something that looks not like you coming into millions overnight but like almost everyone coming into millions."

Climate related deaths have been falling

See Natural Catastrophes by Max Roser of Our World in Data. Excerpt: 
"In the following two charts we explore global fatalities from natural catastrophes since 1900. In the first chart we report the total annual number of deaths from natural catastrophes, as the decadal average from 1900. In the second chart, we report the same data but as the annual rate of global deaths (measured per 100,000 of the world population). The data for both charts can be found in the tables presented here.
Annual global number of deaths from natural catastrophes per decade, 1900-2015
Annual global death rate (per 100,000) per decade from natural catastrophes, 1900-2015

The following time-series plot gives a global overview of the number of reported deaths, the reported monetary damage and the reported number of events. It shows that, as one would expect, more events were reported and the monetary damage in an ever richer world increased, but the number of reported deaths was greatly reduced – as seen in the previous graph.
Number of reported disasters and number of people reported as affected and reported as killed, 1900-2011 – EM-DAT"

Tuesday, June 27, 2017

Green Building Practices May Have Contributed to the Grenfell Fire

The local government put "sustainability" ahead of safety.

By Christian Britschgi of Reason.
"What caused the Grenfell Tower fire? An independent public inquiry into the blaze, which killed at least 79 people in London earlier this month, is slowly getting started, so we won't have a complete answer to that for a while. But one major culprit is already coming into view: a local government pushing "green energy" renovations at the expense of safety.

Preliminary analysis of why the fire spread so rapidly points to the flammable aluminum composite cladding that was installed during a recent renovation project. The renovation was undertaken by the Kensington-Chelsea Tenant Management Organization (KCTMO), the non-profit that managed the tower for the Kensington-Chelsea Borough Council. (The council does not merely contract with the KCTMO but selects a portion of its board.)

According to Jim Glockling, technical director of the Fire Protection Association, a U.K.-based safety organization, this flammable cladding is "often being introduced on the back of the sustainability agenda, but it's sometimes being done recklessly without due consideration to the consequences."

That's not just idle speculation. Documents from the Kensington-Chelsea Borough government and the KCTMO confirm that a "sustainability agenda" was directly behind the decision to install the material. The borough's 2013–2017 housing strategy invokes "the importance of seeking reasonable alterations to the existing building stock to mitigate the causes of and adapt to the effects likely to occur due to climate change," then announces that it "recently agreed to clad a high rise block in the north of the borough, which will improve the energy efficiency of all the properties within it."

That high-rise block was Grenfell.

In a 2013 statement about the renovation project, the KCTMO praised this "upgrade of the cladding" as a way to "greatly enhance the energy efficiency of the tower." The KCTMO repeated this rationale for the new cladding while announcing the selection of a contractor, and again upon completion of the renovations. Multiple tenant newsletters released during the renovations made it clear that the green-energy-concerned Borough Council was responsible for reviewing the cladding options and for making a final determination on which type to select.

This direct link between a "sustainability agenda" and the Grenfell fire has not stopped commentators from trying to cast the issue as one of state-shrinking austerity. A columnist in the Lahore Nation blamed "the desire for profit and accumulation" for the disaster, claiming that "the cladding used in Grenfell Tower was chosen precisely because it was cheaper than non-flammable alternatives." A Guardian writer took the authorities to task for "hands-off management, contracting out, and cost-cutting." Opposition leader Jeremy Corbyn declared that "if you cut local authority expenditure then the price is paid somehow."

This theory ignores the repeatedly stated reason for installing the flammable cladding. It also suggests that the Kensington-Chelsea Borough government endangered its tenants in order to save an estimated £5,000 on a £67 million renovation project, while that same government sat on a reported £200 million in cash reserves.

More broadly, critics have accused the Kensington-Chelsea Borough Council and the KCTMO of a mix of incompetence and indifference in how they manage their public housing properties. If even half of what is being said about them is true, that accusation is accurate. The worst example may be a hamfisted energy agenda that allowed a fire to turn so deadly, so quickly."
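
To put the cost-cutting theory in proportion: using the figures cited above, the disputed saving comes to well under a hundredth of one percent of the project budget. A minimal sketch:

```python
# Scale of the claimed cladding saving relative to the budgets cited above.
saving = 5_000     # estimated saving from the cheaper cladding (GBP)
project = 67e6     # total renovation project cost (GBP)
reserves = 200e6   # reported borough cash reserves (GBP)

print(f"Saving as a share of the project: {saving / project:.4%}")  # ~0.0075%
print(f"Saving as a share of reserves: {saving / reserves:.4%}")    # ~0.0025%
```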

If Seattle’s $15 minimum wage experiment is the ‘canary in the coal mine,’ other cities should proceed with caution

From Mark Perry.
"In an important article in the Seattle Weekly, Daniel Person summarizes the situation in Seattle pretty well in the title of his exposé “The City Knew the Bad Minimum Wage Report Was Coming Out, So It Called Up Berkeley,” here’s a slice:
Two weeks. Two studies on minimum wage. Two very different results. Last week, a report out of the University of California—Berkeley found “Seattle’s minimum wage ordinance has raised wages for low-paid workers, without negatively affecting employment,” in the words of the Mayor’s Office. That report, produced by the Center on Wage and Employment Dynamics at Berkeley, was picked up far and wide as proof that the doomsday scenarios predicted by skeptics of the plan were failing to materialize.
And while another study that came out Monday from researchers at the University of Washington (UW) doesn’t exactly spell doomsday either, it wasn’t exactly rosy. “UW study finds Seattle’s minimum wage is costing jobs,” read the Seattle Times headline Monday morning. The study found that while wages for low-earners rose by 3 percent since the law went into effect, hours for those workers dropped by 9 percent. The average worker making less than $19 an hour in Seattle has seen a total loss of $125 a month since the law went into effect.
There’s an old joke that economics is the only field where two people can win the Nobel Prize for saying the exact opposite thing. However, by all appearances these two takeaways on Seattle’s historic minimum wage law are not a symptom of the vagaries of a social science but an object lesson in how quickly data can get weaponized in political debates like Seattle’s minimum wage fight. In short, the Mayor’s Office knew the unflattering UW report was coming out, and reached out to other researchers to kick the tires on what threatened to be a damaging report to a central achievement of Ed Murray’s tenure as mayor.
And here’s the key takeaway of what Person uncovered:
To review, the timeline seems to have gone like this: The UW shares with City Hall an early draft of its study showing the minimum wage law is hurting the workers it was meant to help; the mayor’s office shares the study with researchers known to be sympathetic toward minimum wage laws, asking for feedback; those researchers release a report that’s high on Seattle’s minimum wage law just a week before the negative report comes out.
In other words, if you don’t like an unflattering study from a team of researchers from the local university that accurately exposes some of the negative employment effects of the city of Seattle’s $15 minimum wage, you shop around – out of state in this case – for a more favorable study of that questionable and risky public policy experiment.

And what didn’t the Seattle mayor’s office like about the UW study? Let’s find out by looking at some of the key findings of the 63-page NBER study “Minimum Wage Increases, Wages, and Low-Wage Employment: Evidence from Seattle” by Ekaterina Jardim, Mark C. Long, Robert Plotnick, Emma van Inwegen, Jacob Vigdor and Hilary Wething (all six are professors in the Daniel J. Evans School of Public Policy and Governance at the University of Washington). The selected excerpts below help tell the story that the city of Seattle didn’t want to hear (emphasis added):
Abstract:
This paper evaluates the wage, employment, and hours effects of the first and second phase-in of the Seattle Minimum Wage Ordinance, which raised the minimum wage from $9.47 to $11 per hour in 2015 and to $13 per hour in 2016. Using a variety of methods to analyze employment in all sectors paying below a specified real hourly rate, we conclude that the second wage increase to $13 reduced hours worked in low-wage jobs by around 9 percent, while hourly wages in such jobs increased by around 3 percent. Consequently, total payroll fell for such jobs, implying that the minimum wage ordinance lowered low-wage employees’ earnings by an average of $125 per month in 2016.
Conclusion:
Our preferred estimates suggest that the Seattle Minimum Wage Ordinance caused hours worked by low-skilled workers (i.e., those earning under $19 per hour) to fall by 9.4% during the three quarters when the minimum wage was $13 per hour, resulting in a loss of 3.5 million hours worked per calendar quarter. Alternative estimates show the number of low-wage jobs declined by 6.8%, which represents a loss of more than 5,000 jobs. These estimates are robust to cutoffs other than $19. A 3.1% increase in wages in jobs that paid less than $19 coupled with a 9.4% loss in hours yields a labor demand elasticity of roughly -3.0, and this large elasticity estimate is robust to other cutoffs.
These results suggest a fundamental rethinking of the nature of low-wage work. Prior elasticity estimates in the range from zero to -0.2 suggest there are few suitable substitutes for low-wage employees, that firms faced with labor cost increases have little option but to raise their wage bill. Seattle data show that payroll expenses on workers earning under $19 per hour either rose minimally or fell as the minimum wage increased from $9.47 to $13 in just over nine months. An elasticity of -3.0 suggests that low-wage labor is a more substitutable, expendable factor of production. The work of least-paid workers might be performed more efficiently by more skilled and experienced workers commanding a substantially higher wage. This work could, in some circumstances, be automated. In other circumstances, employers may conclude that the work of least-paid workers need not be done at all.
Importantly, the lost income associated with the hours reductions exceeds the gain associated with the net wage increase of 3.1%. Using data in Table 3, we compute that the average low-wage employee was paid $1,897 per month. The reduction in hours would cost the average employee $179 per month, while the wage increase would recoup only $54 of this loss, leaving a net loss of $125 per month (6.6%), which is sizable for a low-wage worker.
MP: Here’s one thing the UW study didn’t consider yet, because it’s too early: The additional $2 an hour increase in the city’s minimum wage that just took effect on January 1 of this year from $13 to $15 an hour for large employers. Once local employers feel the full effect of the 58% increase in labor costs for minimum wage workers from $9.47 to $15 an hour in less than two years, it’s likely the negative employment effects uncovered by the UW team for 2016 will continue this year and into the future, and may well increase.

Here’s some additional commentary on the developing Seattle minimum wage story:
1. The Seattle Times Editorial Board warns that “Seattle should open its eyes to minimum-wage research.”
Murray’s office said it had concerns about the “methodology” of the UW study. But the strategy is clear and galling: celebrate the research that fits your political agenda, and tear down the research that doesn’t.
The minimum-wage experiment sweeping the country needs good, thorough, independent research. Seattle led this movement, passing the highest local minimum wage in the country. Does City Hall really want to know the consequences, or does it want to put blinders on and pat itself on the back?
2. Forbes contributor Tim Worstall writes today that “As I Predicted, Seattle’s Minimum Wage Rise Is Reducing Employment.”

3. Max Ehrenfreund writes in today’s Washington Post that “A ‘very credible’ new study on Seattle’s $15 minimum wage has bad news for liberals.”

4. Ben Casselman and Kathryn Casteel express their concerns in FiveThirtyEight that “Seattle’s Minimum Wage Hike May Have Gone Too Far.” Here’s a slice:
In January 2016, Seattle’s minimum wage jumped from $11 an hour to $13 for large employers, the second big increase in less than a year. New research released Monday by a team of economists at the University of Washington suggests the wage hike may have come at a significant cost: The increase led to steep declines in employment for low-wage workers, and a drop in hours for those who kept their jobs. Crucially, the negative impact of lost jobs and hours more than offset the benefits of higher wages — on average, low-wage workers earned $125 per month less because of the higher wage, a small but significant decline.
“The goal of this policy was to deliver higher incomes to people who were struggling to make ends meet in the city,” said Jacob Vigdor, a University of Washington economist who was one of the study’s authors. “You’ve got to watch out because at some point you run the risk of harming the people you set out to help.”
“This is a ‘canary in the coal mine’ moment,” said David Autor, an MIT economist who wasn’t involved in the Seattle research. Autor noted that high-cost cities such as Seattle are the places that should be in the best position to absorb the impact of a high minimum wage. So if the policy is hurting workers there — and Autor stressed that the Washington report is just one study — that could signal trouble as the recent wage hikes take effect in lower-cost parts of the country.
“Nobody in their right mind would say that raising the minimum wage to $25 an hour would have no effect on employment,” Autor said. “The question is where is the point where it becomes relevant. And apparently in Seattle, it’s around $13.”
Bottom Line: If booming high cost-of-living Seattle had a hard time absorbing a $13 an hour minimum wage last year without experiencing negative employment effects (reduced hours, jobs and earnings for low-wage workers), it will have an even more difficult time dealing with the additional $2 an hour increase that took place on January 1 without even greater negative consequences. And if Seattle’s risky experiment with a $15 an hour minimum wage represents the “canary in the coal mine” for cities around the country that want to increase their minimum wages to $15 an hour, those cities may want to hold off for a few years to get a final count of the “dead canaries” in Seattle before proceeding."
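
For readers who want to verify the UW study's headline arithmetic quoted above, here is a minimal sketch. Every input is a figure cited in the excerpts; nothing comes from the underlying data:

```python
# Reproducing the headline arithmetic from the UW study figures quoted above.
hours_change = -0.094  # hours worked in low-wage jobs, second phase-in
wage_change = 0.031    # hourly wages in those jobs

# Labor demand elasticity: percent change in hours per percent change in wages.
elasticity = hours_change / wage_change
print(f"Implied labor demand elasticity: {elasticity:.1f}")  # about -3.0

# Net earnings effect for the average low-wage employee ($1,897/month baseline).
baseline_pay = 1_897
loss_from_hours = 179  # monthly cost of the hours reduction
gain_from_wages = 54   # monthly gain from the wage increase
net = gain_from_wages - loss_from_hours
print(f"Net monthly change: ${net}")                       # -$125
print(f"Share of baseline pay: {net / baseline_pay:.1%}")  # about -6.6%

# Mark Perry's full phase-in point: $9.47 to $15 for large employers.
print(f"Full increase: {15 / 9.47 - 1:.0%}")  # about 58%
```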

Monday, June 26, 2017

Scientists Sharply Rebut Influential Renewable-Energy Plan

Nearly two dozen researchers critique a proposal for wind, solar, and water power gaining traction in policy circles.

By James Temple of the MIT Technology Review.
"On Monday, a team of prominent researchers sharply critiqued an influential paper arguing that wind, solar, and hydroelectric power could affordably meet most of the nation’s energy needs by 2055, saying it contained modeling errors and implausible assumptions that could distort public policy and spending decisions (see “Fifty-States Plan Charts a Path Away from Fossil Fuels”).

The rebuttal appeared in the Proceedings of the National Academy of Sciences, the same journal that ran the original 2015 paper. Several of the nearly two dozen researchers say they were driven to act because the original authors declined to publish what they viewed as necessary corrections, and the findings were influencing state and federal policy proposals.

The fear is that legislation will mandate goals that can’t be achieved with available technologies at reasonable prices, leading to “wildly unrealistic expectations” and “massive misallocation of resources,” says David Victor, an energy policy researcher at the University of California, San Diego, and coauthor of the critique. “That is both harmful to the economy, and creates the seeds of a backlash.”

The authors of the earlier paper published an accompanying response that disputed the piece point by point. In an interview with MIT Technology Review, lead author Mark Jacobson, a professor of civil and environmental engineering at Stanford, said the rebuttal doesn’t accurately portray their research. He says the authors were motivated by allegiance to energy technologies that the 2015 paper excluded.

“They’re either nuclear advocates or carbon sequestration advocates or fossil-fuels advocates,” Jacobson says. “They don’t like the fact that we’re getting a lot of attention, so they’re trying to diminish our work.”

In the original paper, Jacobson and his coauthors heralded a “low-cost solution to the grid reliability problem.” It concluded that U.S. energy systems could convert almost entirely to wind, solar, and hydroelectric sources by, among other things, tightly integrating regional electricity grids and relying heavily on storage sources like hydrogen and underground thermal systems. Moreover, the paper argued, the system could be achieved without the use of natural gas, nuclear power, biofuels, and stationary batteries.

But among other criticisms, the rebuttal released Monday argues that Jacobson and his coauthors dramatically miscalculated the amount of hydroelectric power available and seriously underestimated the cost of installing and integrating large-scale underground thermal energy storage systems. “They do bizarre things,” says Daniel Kammen, director of the Renewable and Appropriate Energy Laboratory at the University of California, Berkeley, and coauthor of the rebuttal. “They treat U.S. hydropower as an entirely fungible resource. Like the amount [of power] coming from a river in Washington state is available in Georgia, instantaneously.”

In an e-mail, Jacobson stood firm on every conclusion in the original article: “There is not a single error in our paper.”

Other models, including Kammen’s, do show that the U.S. can transition to nearly 100 percent zero-emissions energy technologies. But the established view among energy researchers is that it would require making use of nearly every major technology available and that the transition, particularly getting the last 20 percent or so of the way there, would be prohibitively expensive using existing technologies. One of the key missing pieces is affordable grid-scale storage that can efficiently power vast areas for extended periods when wind and solar sources aren’t available (see “Why Bad Things Happen to Clean-Energy Startups”).

Various political and advocacy figures have embraced Jacobson’s ideas. Prior to the 2015 paper, he published a 50-state plan for moving to 100 percent renewables by midcentury, which he says contributed to decisions by both New York and California to enact laws requiring 50 percent renewable energy sources by 2030.

He also cofounded a clean-energy advocacy group, the Solutions Project, whose board members include actor and activist Mark Ruffalo and commentator Van Jones. In late April, Senator Bernie Sanders co-wrote an op-ed with Jacobson in the Guardian, highlighting the 50-state research and trumpeting a bill proposed that week that would move the United States to 100 percent clean energy by 2050.

The authors of Monday’s rebuttal were quick to stress that cutting emissions as quickly as possible is a crucial goal. The concern is that paths for getting there will be wrong if they’re based on incorrect assumptions or miscalculations. Among other things, such errors can skew the public debate by suggesting the transition is merely a question of marshaling political will, rather than one of achieving difficult technological breakthroughs and substantial cost reductions.

That could lead to spending public resources on the wrong technologies, underestimating the research and development still required, or abandoning sources that might ultimately be necessary to reach the stated goals.

Notably, there is growing fear that accelerating retirement schedules for the U.S. fleet of nuclear plants will make it increasingly difficult to make the transition to clean energy. While some interest groups remain opposed to the technology, many researchers believe it should be a crucial part of the energy mix, since it’s the only major zero-emissions source that doesn’t suffer from the intermittency issues plaguing solar and wind.

“Energy issues are complex and hard to understand, and Mark’s simple solution attracts many who really have no way to understand the complexity,” Jane Long, another coauthor and former associate director at Lawrence Livermore National Laboratory, said in an e-mail. “It’s consequently important to call him out.”

Lead author Christopher Clack, chief executive of Vibrant Clean Energy and a former NOAA researcher, described Jacobson’s accusation that the authors were acting out of allegiance to fossil fuels or nuclear power as “bizarre.” The 21 authors of the rebuttal, which features a conflict-of-interest statement, include energy, policy, storage, and climate researchers affiliated with prominent institutions like Carnegie Mellon, the Carnegie Institution for Science, the Brookings Institution, and Jacobson’s own Stanford.

Clack says he was motivated to oversee the additional peer-review process because he believed the earlier conclusions were wrong, and the authors refused to correct them. He added that the process took more than a year and went through two reviews by the journal’s editorial board.

“We stayed the course because we believe it, and want the truth out there,” he says."

Amazon is about to buy Whole Foods. Is it time to panic?

By Aeon J. Skoble. He is Professor of Philosophy at Bridgewater State University.
"Does the thought of Amazon buying Whole Foods make you speculate about the demise of competing grocers and fume about the online retailer’s plans for world domination? Or does it get you excited about a new era of affordable and convenient high-end groceries?

On June 16, 2017, Amazon announced that it intends to purchase Whole Foods. By June 21, the New York Times had published an op-ed denouncing the acquisition. Evoking images of colluding railroad barons, scholar Lina Khan argues that antitrust officials should stop the acquisition.

She notes that Whole Foods represents less than 5% of the grocery market, which would hardly seem worth a trust-buster’s time and energy, but “antitrust officials would be naïve to view this deal as simply about groceries. Buying Whole Foods will enable Amazon to leverage and amplify the extraordinary power it enjoys in online markets and delivery, making an even greater share of commerce part of its fief,” Khan writes.

In other words, the proposed acquisition is bad because it would make a strong company stronger. This perspective, and the language used to support it, is riddled with misconceptions.

Khan uses the word “fief,” which connotes the feudal system. Feudal barons got their holdings and their power by royal decree, enforced by soldiers. This is the sort of arrangement that classical liberalism emerged to oppose.

But Amazon is hardly a fiefdom.

A moment’s reflection shows why this is a terrible comparison: Amazon’s “power,” to whatever extent it has any in a nonmetaphorical way, didn’t come from royal grants — it came from serving millions of customers. The language used to talk about this and other antitrust cases gets traction by evoking unjust power relations, but this comparison is generally unwarranted.

Khan talks about Amazon “capturing” and “controlling” markets as if it were invading peninsulas and occupying mountain passes. But nothing nefarious has occurred: Amazon is huge now, but it’s not even 25 years old. It grew from literally nothing to what it is today because it has served millions of people who continue to buy its many goods and services.

As a general principle, all commerce benefits both the buyer and the seller. The buyer gets something of value and the seller profits. As a seller, there are basically three ways things can work out: you can fail to provide sufficient value for buyers and go out of business; you can provide just-good-enough service to stay in business; or you can be very successful at providing value, leading to increased profitability and growth of the company.

Amazon provides tremendous value to consumers.

That Amazon is an example of the latter is not evidence of wrongdoing. If I get wealthy by stealing, I have done something unjust. If I get wealthy by discovering new markets and providing value for millions of people, that’s win-win.

I’m partly responsible for Amazon’s billions: I buy its books, movies, music, and electronics, for myself and as gifts for others. Shopping online doesn’t work for me in select cases, but for most things, it does, so I get tremendous value from Amazon.

It’s not just the fact of shopping online — I have dealt with online retailers who ship very slowly, who have unresponsive customer service, who have unfriendly return policies. Amazon doesn’t do any of that. So I get convenience, selection, value, good service, and peace of mind — and so do literally millions of other people. That’s why Amazon is a large and growing business.

Is Amazon the bogeyman?

Why Amazon’s position should frighten me is unclear. Scare tactics like “but they control the market” don’t work for me — for one thing, Amazon doesn’t control the market. If I can get a better price or faster service at Home Depot or Best Buy, I’ll shop there instead. Antitrust warriors use a bait-and-switch: they encourage you to be worried about “monopolies,” because a monopoly would have you at its mercy. But then they redefine “monopoly” to mean “any company that has a large market share.”

What’s more, this rhetoric overlooks the fact that there could be very good, customer-centric reasons why certain companies have such a large market share. Khan frets that Amazon has 74% of the e-book market. My initial response is, “So what?” That means the other 26% of e-books are being sold by competitors, which means Amazon does not hold a monopoly. But further, this criticism overlooks the fact that Amazon is largely responsible for there even being an e-book industry. And why do so many people have Kindles? Because they like them.

So now, a tiny additional percentage of people might be buying groceries from Amazon — but only if they want to. If Amazon starts using force to make people buy groceries online, I’ll be the first to denounce it. Using force to require or prohibit transactions is the opposite of what classical liberalism is all about. Demanding that federal regulators use force to prohibit Amazon from acquiring Whole Foods is illiberal and unwarranted."

Sunday, June 25, 2017

Medicaid Scare Tactics Are Irresponsible

By Charles Blahous of Mercatus.
"If we want to make headway on improving public policy discourse, a good place to start might be with how we’re debating Medicaid policy, in particular how it might be affected by pending legislation to repeal and replace the Affordable Care Act (ACA), including legislation presented on Thursday by Senate Republicans.

Medicaid has long been on an unsustainable cost growth trajectory.  This was true long before the ACA was passed in 2010, though the ACA exacerbated the problem.  Annual federal Medicaid spending is currently projected (see Figure 1) to grow from $389 billion in 2017 to $650 billion in 2027.  The biggest problem with that growth rate is that it’s faster than what’s projected for our economy as a whole.  As with Social Security and Medicare, Medicaid costs are growing faster than our ability to finance them.
Figure 1
           
Medicaid serves a sympathetic low-income population.  This purpose, however, does not lessen the necessity of placing the program on a financially sustainable course.  Nor does it eliminate lawmakers’ obligation to prioritize how Medicaid dollars are best spent; to the contrary, it magnifies it.  Lawmakers face the conflicting pressures of targeting Medicaid resources to where they are most needed, while also limiting aggregate spending growth to a sustainable level.

This situation creates irresistible political opportunities for those inclined to exploit them.  Whenever lawmakers take on the unenviable job of moderating cost growth to sustainable rates, their efforts can be and are described as heartless “cuts” relative to existing law – even though existing Medicaid law cannot be maintained indefinitely. This creates a Catch-22: the existence of an untenable Medicaid cost growth baseline both mandates responsible action to repair it and establishes a warped basis for comparison that amplifies the political hazards of doing so.

We have seen this dynamic operate with full force in the recent public debate over efforts to repeal and replace the ACA, including its Medicaid provisions.  Countless editorials and news articles have portrayed an intent by Congress to “gut” Medicaid to pay for “tax cuts for the rich.”  This intensifying drumbeat has led to disturbing vitriol and threats against legislators, based on gross mischaracterizations of the implications of pending legislation.  Consider for example an op-ed recently published in the New York Times:

“Imagine your mother needs to move into a nursing home. It’s going to cost her almost $100,000 a year. Very few people have private insurance to cover this. Your mother will most likely run out her savings until she qualifies for Medicaid. . . Many American voters think Medicaid is only for low-income adults and their children — for people who aren’t “like them.” But Medicaid is not “somebody else’s” insurance. It is insurance for all of our mothers and fathers and, eventually, for ourselves. The American Health Care Act that passed the House and is now being debated by the Senate would reduce spending on Medicaid by over $800 billion, the largest single reduction in a social insurance program in our nation’s history. . . . Many nursing homes would stop admitting Medicaid recipients and those who don’t have enough assets to ensure that they won’t eventually end up on Medicaid. Older and disabled Medicaid beneficiaries can’t pay out of pocket for services and they do not typically have family members able to care for them. The nursing home is a last resort. Where will they go instead? . . . Draconian cuts to Medicaid affect all of our families. They are a direct attack on our elderly, our disabled and our dignity.”

Most anyone reading such an editorial would come away with the fear that pending legislation would threaten the access of the elderly and disabled to Medicaid services.  It wouldn’t.  The elderly and the disabled who were eligible for Medicaid prior to the ACA would remain eligible after its proposed repeal.  The ACA’s Medicaid expansion population involved childless adults under the age of 65, a different category of beneficiaries altogether.

The large projected expenditure reduction under the AHCA (the House’s repeal-and-replace bill) actually has nothing to do with disabled or elderly Medicaid beneficiaries but rather with changes in projected enrollment for the ACA’s expansion population.  Doug Badger estimated in a recent paper that 82% of the Medicaid savings projected for the AHCA by CBO arose from changes to projected enrollment patterns – not from anything that would undermine care for the person profiled in the Times op-ed.  The story is likely to be quite similar under the recently-unveiled Senate bill.

The Chief CMS Actuary recently weighed in with its own estimate of $383 billion in ten-year cost savings from the House bill’s Medicaid provisions – less than half the savings projected by CBO.  A primary difference between the two estimates has to do with what CMS and CBO respectively believe would happen if the ACA remained on the books.  CMS projects that under a continuation of the ACA, the proportion of the potentially newly-Medicaid-eligible population living in Medicaid-expansion states would remain at its current 55 percent.  CBO by contrast assumes that additional states would expand Medicaid if the ACA remained law.  CBO further assumes that many fewer people will participate in Medicaid if the ACA is repealed, even if they remain fully eligible to do so.  The bottom line is that the essential difference between these two assumptions has nothing to do with people now on Medicaid losing their access to coverage.

It is fair to be concerned that fewer people would receive Medicaid coverage in the future under pending legislation than under the ACA. However, current projections bear no resemblance to a picture in which people historically dependent on Medicaid would lose their benefits.  To the contrary, CMS estimates (see Figure 2) that Medicaid enrollment would stay roughly constant at current levels under the AHCA, while still being substantially higher than projected before the ACA was passed.  Indeed, CMS finds that many states would still cover some of the ACA expansion population even if lawmakers do away with the ACA’s inflated federal matching payment rates.  This would mean expanded coverage relative to pre-ACA levels, while also being more equitable than the ACA.
Figure 2
It is also fair to wonder about the long-term effects of per-capita growth caps proposed under both the AHCA and the Senate bill – though not relative to unsustainable promises under current law, but rather to an alternative method of attaining financial sustainability.  But no one should associate figures such as $800 billion in cuts with these proposed caps. As previously described, most of CBO’s projected cost reduction is unrelated to the concept, while CMS’s estimate of the caps’ budgetary effects is well less than 10% of that amount.

It is perfectly appropriate for there to be a vigorous, even impassioned debate about whose proposals would provide the best way forward for the Medicaid program.  But we ill serve the public with misleading, incendiary rhetoric about vulnerable elderly being ejected from nursing homes so that cruel politicians can provide tax cuts to the rich, when nothing under consideration can be fairly described as doing any such thing.  If advocates want their health policy arguments to be taken seriously, and to usefully inform the American public, groundless hyperbole should be shelved in favor of a focus on what existing proposals would actually do."
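
The relative magnitudes Blahous cites are easy to make explicit. A minimal sketch, taking the "over $800 billion" CBO figure at its cited floor:

```python
# Decomposing the Medicaid savings figures cited above.
cbo_savings = 800e9      # CBO projection, "over $800 billion" (cited floor)
enrollment_share = 0.82  # Badger: share arising from enrollment projections
cms_savings = 383e9      # CMS Chief Actuary's ten-year estimate

print(f"CBO savings tied to enrollment projections: "
      f"${cbo_savings * enrollment_share / 1e9:.0f}B or more")
print(f"CMS estimate as a share of the CBO figure: "
      f"{cms_savings / cbo_savings:.0%}")  # under half

# The per-capita caps: CMS puts their effect at "well less than 10%" of the
# $800B figure, i.e. under about $80B over ten years.
print(f"10% of the cited CBO figure: ${cbo_savings * 0.10 / 1e9:.0f}B")
```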

This product is also yet another example of how the environment is cleaned by capitalism (cheap water filter)

See Cleaned by Capitalism XXXIX by Don Boudreaux.
"Available from Amazon.com for $14.99 (and from other retailers at a similar price) is this handy device that filters and decontaminates water whenever someone uses it as a straw.  Lifestraw removes 99.9999% of waterborne bacteria and 99.9% of waterborne protozoa.  Each Lifestraw filters the amount of water that the typical person drinks in the course of a year.  Made (I think in Poland) by the Swiss company Vestergaard, Lifestraw – from its conception to the system that allows it to be produced and distributed and sold at a price that’s about 2/3rds of the amount of money that an ordinary American worker earns in a single hour – is a marvelous example of human ingenuity and of the largely unseen and under-appreciated productive power of a globe-spanning market.  This product is also yet another example of how the environment is cleaned by capitalism.  And since Lifestraw became available, the process of reducing water pollution is a bit less of a public good than it was before the availability of Lifestraw.

I thank Warren Smith for sending me this article about Lifestraw."
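
Boudreaux's affordability comparison can be made concrete: $14.99 at two-thirds of an hour's pay implies hourly earnings of about $22.50. A minimal sketch; the comparison to 2017 average hourly earnings is my assumption, not his:

```python
# The affordability claim quoted above: the $14.99 price is about 2/3 of
# what an ordinary American worker earns in a single hour.
price = 14.99
share_of_hour = 2 / 3

implied_hourly_wage = price / share_of_hour
print(f"Implied hourly earnings: ${implied_hourly_wage:.2f}")  # about $22.50
# For context (my assumption, not from the excerpt): average private-sector
# hourly earnings in mid-2017 were in the mid-$20s, so the claim is in the
# right ballpark.
```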

Saturday, June 24, 2017

How Financial Regulations Can Create Barriers to Entry: The Case of Cumplo in Chile

A new Stigler Center case study chronicling the story of Chile’s first crowdfunding platform and its early regulatory challenges illustrates how financial regulations can be effectively used by incumbents to stifle competition

By Asher Schechter of the Pro Market blog. Excerpts:
"In June 2012, the founders of Cumplo, Chile’s first crowdfunding platform, were called to a meeting in the office of the country’s banking regulator. They were given an ultimatum: “If you don’t stop doing what you’re doing in 48 hours,” the regulator told them, “I will be forced to report your activity and you may end up spending 541 days in jail.”

Cumplo, said the regulator—a former manager at one of Chile’s biggest banks—was in violation of the country’s banking law, which prohibits any unlicensed individual or organization from keeping deposits or acting as a financial intermediary. The law was originally passed in the early 1980s, during Chile’s banking crisis, to promote financial stability.

Five days later, the regulatory agency in charge of supervising banks in Chile (SBIF) officially charged Cumplo. The company, founded in 2011 by Nicolas Shea and his wife Josefa Monge, countered that it was a mere peer-to-peer (P2P) lending platform, a marketplace that allows borrowers and lenders to connect, borrow and lend among each other directly. 

The regulator insisted that Cumplo was operating illegally as a “money broker.” Shortly thereafter, six armed police agents raided the company’s offices, looking for secret hard drives and files. According to Shea, he improvised a role-playing session to show that Cumplo was not a financial firm. One week later, he says, one of the officers came back to Cumplo’s office to try to renegotiate his retail store loans. Days later, Shea and co-founder Jean Boudeguer were interrogated by the district attorney in the presence of police officers."

"The Chilean banking industry is highly concentrated, with the three largest banks—Santander Chile, Banco de Chile, and BCI—accounting for 50 percent of loans and nearly two-thirds of the profits (as of 2015).

Following Chile’s banking crisis of the early 1980s, the government took control of many of the nation’s banks. These banks ended up with major “subordinated debts” to the central bank, which banks paid annually as a percentage of their profits. These debts, according to the Chilean economist Manuel Cruzat Valdés, created a strong incentive for the government to keep the banking system a “closed club.”

“Authorities disregarded competition for the sake of a Central Bank debt collection, but in the process they concentrated the allocation of capital into a small but powerful group, with negative consequences on competition levels in the credit sector and, by consequence, all over the economy. Credit from banks was—and is—dominant in total credit allocation, as opposed to the U.S., where capital markets effectively allowed credit alternatives to those coming from banks. The end result was extremely high levels of concentration in almost all economic sectors, cross shareholding practices, and interlocking, to say the least. Collusive practices were just a natural but more extreme consequence of this process. However, much more important and damaging because of its massiveness, was a dormant competitive environment born out of these conditions,” says Valdés."

"Other P2P lending platforms around the world, like Prosper and the Brazilian Fairplace, have faced strong regulatory and legal issues, and Cumplo was no exception. But the response Cumplo faced in Chile was tougher than what other P2P platforms have had to face. 

Expecting a harsh response from the banking industry, Shea consulted a couple of prominent banking lawyers before starting Cumplo. “You are insane. This can’t be legal and if it were, banks will smash you in a heartbeat,” one lawyer friend told him in 2011. The lawyers, he says, understood that Cumplo did not take deposits and that technically there could not be financial intermediation, but they realized it was a fine line. “We would need to go in further, but from what you tell me, you are the marketplace, not the intermediary, so it is not illegal,” another lawyer told him at the time.

Before launching, Shea met with Chile’s then minister of the economy, Pablo Longueira, who as a senator had spent years on the financial committee. He estimated that Cumplo did not need to consult financial regulators, since its operation doesn’t take deposits or invest money. Later on, Cumplo consulted Victor Vial, former general counsel of Chile’s Central Bank and a top banking lawyer, for a formal legal opinion. According to Shea and Monge, Vial praised Cumplo’s business model, concluding that “If there is any intermediation in Cumplo, it’s of people, not of money.”

Cumplo was started in August 2011 and its first loan was financed in March 2012. Initially, Cumplo found some success. In its first nine months, lenders on the platform provided roughly $87,000 in loans. The growth of the site attracted the attention of the press, and the article in La Segunda that appeared in May 2012 made the small start-up seem like a potential threat to the banking industry. Subsequently, the regulator charged Cumplo with violating the banking law.

“The banking regulator called [Monge and Boudeguer] up to his office in June 2012. He told them ‘Kids (cabros), I’m glad you came. I wanted to make sure that you understood what is going on here, because what you are doing is illegal and if you keep doing it I will press criminal charges against you,’” says Shea. “After explaining what we did and asking him what was wrong about it, he said that he didn’t really care to understand. All he said, after he couldn’t explain our wrongdoing, was ‘I’m not a lawyer, so I can’t go into technicalities. All I know is that I got notice from the general counsel of [an incumbent bank] that what you are doing is illegal, and if you don’t stop doing it within the next 48 hours, I will start a criminal investigation against you personally and you will risk 541 days in jail.’”

Hoping the troubles would go away, Cumplo tried to appease the regulators. In meetings, Cumplo executives were told that the main concern was the platform’s use of virtual accounts. Cumplo did not hold deposits, but it did maintain virtual accounts in which lenders’ funds were kept as collateral. As a gesture to regulators, the company modified its platform and removed the virtual accounts, hoping the situation would then be resolved, but to no avail: a criminal investigation ensued. In July 2012, police raided the company’s offices. Days later, Shea and others were interrogated. “‘Let me give you some advice, kid,’” Shea was told by a senior industry representative around that time. “‘Your business is too dangerous and complicated. You should forget about it.’”"

"Cumplo’s case also received considerable media attention, both from domestic and international outlets like The Economist, which criticized Chile’s government for putting the company “through regulatory hell.” The media attention eventually allowed Cumplo to fend off the initial attacks."

"In its current iteration, Cumplo has begun to find some success. The company has recently reached the threshold of $10 million loans financed per month and should reach $200 million this year, according to Shea. After five years of operation, last month the company has finally reached break-even. The average loan, he says, is around $37,000 and is financed by 11 investors.

Shea, who briefly attempted to run for Chile’s presidency earlier this year, says he is optimistic about the future of Cumplo, but while the company has found some success in the SME market, its regulatory problems are not over. Nevertheless, its early struggles point not only to the troubles that many other P2P lending platforms face regarding the feasibility of their models, but also to the way regulation can be effectively used by incumbents to stifle competition."
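As a rough consistency check on the scale figures Shea cites, here is a minimal back-of-the-envelope sketch in Python. The monthly volume, average loan size, and investor count come from the excerpt; the derived quantities are our own arithmetic.

    # Back-of-the-envelope check of Cumplo's scale, using figures from the excerpt.
    monthly_volume = 10_000_000  # dollars in loans financed per month
    average_loan = 37_000        # dollars per loan
    investors_per_loan = 11

    loans_per_month = monthly_volume / average_loan          # ~270 loans
    per_investor_share = average_loan / investors_per_loan   # ~$3,364 per investor

    print(f"Loans financed per month: ~{loans_per_month:.0f}")
    print(f"Average contribution per investor: ~${per_investor_share:,.0f}")
    # Note: $10 million a month annualizes to $120 million, so reaching the
    # $200 million target for the year implies continued month-over-month growth.
    print(f"Annualized volume at the current rate: ${monthly_volume * 12:,}")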

More School Choice, Less Crime

By Corey A. DeAngelis of Cato.
"One of the original arguments for educating children in traditional public schools is that they are necessary for a stable democratic society. Indeed, an English parliamentary spokesman, W.A. Roebuck, argued that mass government education would improve national stability through a reduction in crime.

Public education advocates, such as Stand for Children’s Jonah Edelman and the American Federation of Teachers’ Randi Weingarten, still insist that children must be forced to attend government schools in order to preserve democratic values.

Theory

In principle, if families make schooling selections based purely on self-interest, they may harm others in society. For instance, parents may send their children to schools that shape only academic skills. As a result, children could miss out on essential moral education and harm others through a higher propensity to commit crimes in the future.

However, since families value the character of their children, they are likely to make schooling decisions based on institutions’ abilities to shape socially desirable skills such as morality and citizenship. Further, since school choice programs increase competitive pressures, we should expect the quality of character education to increase in the market for schooling. An increase in the quality of character education decreases the likelihood of criminal activity and therefore improves social order.

Evidence

There are only three studies causally linking school choice programs to criminal activity. Two examine the impacts of charter schools and one looks at the private school voucher program in Milwaukee. Each study finds that access to a school choice program substantially reduces the likelihood that a student will commit crimes later in life.

Notably, Dobbie & Fryer (2015) find that winning a random lottery to attend a charter school in Harlem completely eliminates the likelihood of incarceration for males. In addition, they find that female charter school lottery winners are less than half as likely to report having a teen pregnancy.



[Table omitted. Note: a box highlighted in green indicates that the study found a statistically significant crime reduction.]

According to the only causal studies that we have on the subject, school choice programs improve social order through substantial crime reduction. If public education advocates want to continue to cling to the idea that traditional public schools are necessary for democracy, they ought to explain why the scientific evidence suggests the opposite.

Of course, these impacts play a significant role in shaping the lives of individual children. Perhaps more importantly, these findings indicate that voluntary schooling selections can create noteworthy benefits for third parties as well. If we truly wish to live in a safe and stable democratic society, we ought to allow parents to select the schooling institutions that best shape the citizenship skills of their own children."

Friday, June 23, 2017

The Problems With A New Study On Seattle's $15 Minimum Wage

By Michael Saltsman in Forbes.
"The headlines were ebullient: "Minimum Wage Increase Hasn't Killed Jobs in Seattle." So said a report from a team of researchers affiliated with the University of California-Berkeley, timed for the three-year anniversary of the law.

Seattle Mayor Ed Murray conveniently had an infographic designed and ready to go for the study's release. His office excitedly tweeted that the policy had "raised food workers' pay, without negative impact on employment," linking to an uploaded version of the study on the Mayor's own .gov website rather than a university domain.

The Mayor's enthusiasm was understandable: The report "was prepared at the request of the Mayor of Seattle," according to the authors--apparently as a public relations prop. Less clear is why the study was done in the first place.

The City of Seattle was already funding a highly qualified, unbiased research team at the University of Washington to do such a report. The team includes a roster of impressive researchers from a wide variety of backgrounds--ranging from Jacob Vigdor, a professor at the university and an adjunct fellow at the Manhattan Institute, to Hillary Wething, formerly of the union-backed Economic Policy Institute.

The UW team’s reports on Seattle’s $15 experiment had something for everyone. Unfortunately for the Mayor’s office, the conclusions on the early stages were not uniformly positive. The Washington Post reports:

    The average hourly wage for workers affected by the increase jumped from $9.96 to $11.14, but wages likely would have increased some anyway due to Seattle's overall economy. Meanwhile, although workers were earning more, fewer of them had a job than would have without an increase. Those who did work had fewer hours than they would have without the wage hike.

Nuanced conclusions like this one don't lend themselves to celebratory press releases like the one the Mayor's office put out yesterday. Enter the Berkeley team, which always arrives at the same positive conclusion on minimum wage no matter the number:

In their view, a higher minimum wage is always a good thing.

In an exposé published last year, the Albany Times Union used emails obtained via a public records request to explore the motivations of the Berkeley team:

    The Times Union was recently provided hundreds of pages of emails among minimum wage advocates, Jacobs and other Berkeley academics, demonstrating a deep level of coordination between academics and advocates....

    The Berkeley Labor Center has done at least six other studies on the minimum wage in California municipalities, all showing that a wage increase would be beneficial. In fact, Jacobs could not name a study conducted by Berkeley that said raising wages would have an overall negative impact. ...

Given this history of identical results, it's not surprising that the Murray administration in Seattle was anxious to have a copy of the predictably positive Berkeley report to tout on the third anniversary of its minimum wage law.

Yet the anecdotal evidence in Seattle backs up the empirical data provided by the UW team. Local establishments such as Louisa's Cafe and z Pizza have shut down, with owners citing the cost of the minimum wage law as a factor. Karam Mann, a franchisee who owns a Subway location with his wife Heidi, has cut his staffing levels from seven employees down to three.

The wage floor is still rising in Seattle, and there are more chapters to write on the city's minimum wage experiment. But if accuracy is the goal, the Berkeley team is not the right choice to author them."

The New York restaurant industry is slowing down, adding fewer jobs and shedding eateries amidst recent hikes in the minimum wage

Restaurant workers feeling the pinch in New York by Lisa Fickenscher of the NY Post. 
"The New York restaurant industry is slowing down, adding fewer jobs and shedding eateries amidst recent hikes in the minimum wage.

The Empire State lost 1,000 restaurants last year and the number of jobs as cooks, servers and dishwashers grew by an anemic 1.4 percent. That’s a far cry from the 4.4 percent annual growth the state’s eateries enjoyed from 2010 to 2015, according to the Employment Policies Institute, a nonprofit research group.

The Big Apple accounts for the lion’s share of the state’s growth — and the slowdowns in the city are more dramatic.

Employment growth at fast-food restaurants in the city — which are required to pay $12 an hour, or $1 more than other employers — shriveled to 3.4 percent last year, compared with 7 percent annual growth from 2010 to 2015. The slide has continued into 2017, with just 2 percent growth through May.

Full-service restaurants in the city are adding even fewer jobs, with growth at just 1.3 percent last year compared to 6.5 percent over the previous five years. This year it’s down to 1.2 percent through May.

“This is a drop-off in restaurant growth that didn’t even show up during the Great Recession,” said Michael Saltsman, managing director of the Employment Policies Institute. “It’s compelling evidence that something big is going on.”

Some economists point to a rise in pay that began in 2016 when the state began implementing a series of minimum wage increases that will bring the hourly rate to $15 by 2019 for some employers in the city and more gradually in other parts of the state.

On Dec. 31, 2015, the minimum wage for tipped restaurant employees rose by 50 percent, from $5 an hour to $7.50 an hour. For fast food workers, it rose by as much as 20 percent, from $8.75 to $10.50, depending on business size and location.

And on Dec. 31, 2016, the minimum wage for fast food employees rose as high as $12 in New York City, Saltsman notes. Meanwhile, the statewide minimum wage rose to between $9.70 and $11 an hour for non-fast food, non-tipped employees.

“It’s a miserable business at the moment,” said Andrew Schnipper, who owns five burger joints in Manhattan called Schnipper’s Quality Kitchen. “Most restaurateurs are far less profitable than they were a year ago.”

Other experts point to high rents and oversaturation in the foodie capital of the world, where nearly every growing restaurant chain wants to plant a flag and become the next Shake Shack.

Restaurant employment across the country has been slowing this year, but not as steeply as in New York, experts say.

“It’s not unusual for growth like that not to be sustained forever,” said James Parrott, an expert on city and state economics, who recently left the Fiscal Policy Institute. “Restaurant employment [in New York] overall is still increasing, and average wages grew about 6 percent in 2016.”

But at what cost, ask some restaurateurs.

Schnipper’s, for example, has 10 percent fewer employees than it did a year ago, and many of its current workers have reduced hours. The chain raised its menu prices by up to 4 percent last year and is planning another hike this summer and another in January, when the minimum wage in the city rises to $13. Meanwhile, sales have slipped this year.

Still, “It’s hard to know whether customers are scared off by higher prices,” Schnipper said."
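As a quick arithmetic check on the wage increases cited above, here is a minimal sketch in Python; the hourly rates come from the excerpt, and the helper function is ours.

    # Quick check of the minimum wage increases cited in the excerpt.

    def pct_increase(old, new):
        """Percentage increase from the old hourly rate to the new one."""
        return (new - old) / old * 100

    # Tipped restaurant employees: $5.00 -> $7.50 (cited as a 50 percent rise)
    print(f"Tipped: {pct_increase(5.00, 7.50):.0f}%")       # -> 50%

    # Fast food workers: $8.75 -> $10.50 (cited as "as much as 20 percent")
    print(f"Fast food: {pct_increase(8.75, 10.50):.0f}%")   # -> 20%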

Thursday, June 22, 2017

Ed Glaeser makes the case for housing deregulation

See Build, Baby, Build by Bryan Caplan of EconLog.
"Ed Glaeser makes the case for housing deregulation for Brookings:
Housing advocates often discuss affordability, which is defined by linking the cost of living to incomes. But the regulatory approach on housing should compare housing prices to the Minimum Profitable Production Cost, or MPPC. An unfettered construction market won’t magically reduce the price of purchasing lumber or plumbing. The best price outcome possible, without subsidies, is that prices hew more closely to the physical cost of building.
In a recent paper with Joseph Gyourko, we characterize the distribution of prices relative to Minimum Profitable Production Costs across the U.S... We base our estimates on an "economy" quality home, and assume that builders in an unregulated market should expect to earn 17 percent over this purely physical cost of construction, which would have to cover other soft costs of construction including land assembly.
We then compare these construction costs with the distribution of self-assessed housing values in the American Housing Survey. The distribution of price to MPPC ratios shows a nation of extremes. Fully 40 percent of the American Housing Survey homes are valued at 75 percent or less of their Minimum Profitable Production Cost... Another 33 percent of homes are valued at between 75 percent and 125 percent of construction costs.
[...]
But the most productive parts of America are unaffordable. The National Association of Realtors data shows median sales prices over $1,000,000 in the San Jose metropolitan area and over $500,000 in Los Angeles. One-tenth of American homes in 2013 were valued at more than double Minimum Profitable Production Costs, and assuredly the share is much higher today. In 2005, at the height of the boom, almost 30 percent of American homes were valued at more than twice production costs.
We should blame the government, especially local government:
How do we know that high housing costs have anything to do with artificial restrictions on supply? Perhaps the most compelling argument uses the tools of Economics 101. If demand alone drove prices, then we should expect to see places that have high costs also have high levels of construction.
The reverse is true. Places that are expensive don’t build a lot, and places that build a lot aren’t expensive. San Francisco and urban Honolulu have the highest ratios of prices to construction costs in our data, and these areas permitted little housing between 2000 and 2013. In our sample, Las Vegas was the biggest builder, and it emerged from the crisis with home values far below construction costs.
The top alternate theory is wrong:
The primary alternative to the view that regulation is responsible for limiting supply and boosting prices is that some areas have a natural shortage of land.
Albert Saiz’s (2011) work on geography and housing supply shows that where geography, like water and hills, constrains building, prices are higher. He also finds that measures of housing regulation predict less building and higher prices.
But lack of land can’t be the whole story. Many expensive parts of America, like Middlesex County, Massachusetts, have modest density levels and low levels of construction. Other areas, like Harris County, Texas, have higher density levels, higher construction rates and lower prices...
If land scarcity were the whole story, then we should expect houses on large lots to be extremely expensive in America’s high-priced metropolitan areas. Yet typically the willingness to pay for an extra acre of land is low, even in high-cost areas. We should also expect apartments to cost roughly what it takes to add an extra story to a high-rise building, since growing up doesn’t require more land. Typically, Manhattan apartments sell for far more than the engineering cost of growing up, which implies the power of regulatory constraints (Glaeser, Gyourko and Saks, 2005).
Which regulations are doing the damage?  It's complicated:
Naturally, there are also a host of papers, including Glaeser and Ward (2009), showing the correlation between different types of rules and either reductions in new construction or increases in prices or both. The problem with empirical work on any particular land use control is that there are so many ways to say no to new construction. Since the rules usually go together, it is almost impossible to identify the impact of any one control. Moreover, eliminating a single rule is unlikely to make much difference, since anti-growth communities would easily find other ways to block construction.
Functionalists are wrong, as usual:
Empirically, there is also little evidence that these land use controls correct for real externalities. For example, if people really value the lower density levels that land use controls create, then we should expect to see much higher prices in communities with lower density levels, holding distance to the city center fixed. We do not (Glaeser and Ward, 2010). Our attempt to assess the total externalities generated by building in Manhattan found that they were tiny relative to the implicit tax on building created by land use controls (Glaeser, Gyourko and Saks, 2005).
What's to be done?  State governments are our least-desperate hope:
The right strategy is to start in the middle. States do have the ability to rewrite local land use powers, and state leaders are more likely to perceive the downsides of over-regulating new construction. Some state policies, like Massachusetts Chapters 40B, 40R and 40S, explicitly attempt to check local land use controls. In New Jersey, the state Supreme Court fought against restrictive local zoning rules in the Mount Laurel decision. If states do want to reform local land use controls, they might start with a serious cost-benefit analysis and then require localities to refrain from any new regulations without first performing cost-benefit analyses of their own.
It will be a great day when constructing new housing regulations is as big a bureaucratic nightmare as constructing new housing is now!"
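To make the excerpt's arithmetic concrete, here is a minimal sketch in Python of the price-to-MPPC comparison Glaeser describes above. The 17 percent markup and the ratio buckets come from the excerpt; the dollar figures below are hypothetical examples, not data from the paper.

    # A minimal sketch of the price-to-MPPC comparison described above.
    # The 17% markup and ratio buckets come from the excerpt; the homes
    # below are hypothetical examples, not data from Glaeser and Gyourko.

    MARKUP = 1.17  # builders assumed to earn 17% over the physical cost of construction

    def price_to_mppc_ratio(home_value, physical_cost):
        """Ratio of a home's self-assessed value to its Minimum Profitable
        Production Cost (physical construction cost plus builder markup)."""
        return home_value / (physical_cost * MARKUP)

    def classify(ratio):
        """Bucket a home the way the excerpt summarizes the 2013 distribution."""
        if ratio <= 0.75:
            return "75% or less of MPPC (about 40% of homes)"
        if ratio <= 1.25:
            return "75% to 125% of MPPC (about 33% of homes)"
        if ratio <= 2.0:
            return "125% to 200% of MPPC"
        return "more than double MPPC (about 10% of homes)"

    # Hypothetical homes: (self-assessed value, physical construction cost)
    for value, cost in [(150_000, 200_000), (230_000, 200_000), (1_000_000, 200_000)]:
        r = price_to_mppc_ratio(value, cost)
        print(f"${value:,} home on a ${cost:,} build: ratio {r:.2f} -> {classify(r)}")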

Meet the Jones Act: A 97-year-old regulatory relic that’s costing you money

From Mark Perry.
"Here are some key parts of my op-ed in today’s Washington Examiner:

On June 5, 1920, during President Woodrow Wilson’s administration, Congress passed the Jones Act, ushering in a form of protectionism for the United States shipping industry and seafaring unions that would eventually drive up energy prices and conflict with the goal of achieving energy independence. Now, 97 years later, the need for the Jones Act is sensibly being questioned and challenged as never before.

Specifically, the Jones Act requires that vessels carrying goods between U.S. ports be built, registered, owned, and crewed by U.S. citizens and fly the U.S. flag. Since it costs more to build and operate U.S. ships than foreign ones, the statute has imposed significantly higher shipping costs on the U.S. economy compared with more competitive international rates. For example, it costs about three times more to ship oil from the Gulf Coast to New England than to ship the same amount of oil to Europe.

There are the traditional concerns arising from the Jones Act: higher shipping costs, bottlenecks of oil stored at U.S. ports waiting for tankers, higher oil and gas prices, increased reliance on imported oil, and the potential for slower response to hurricanes and oil spills. The century-old statute is raising anxieties that it’s become a regulatory hurdle making it more difficult to use our country’s rising oil and natural gas production from the shale revolution to reshape the balance of global and economic power.

But two new factors are now fueling the debate.

One is a shortage of Jones Act-eligible tankers available to ship oil from Gulf Coast ports to coastal refineries in Philadelphia and other cities. In 2000, there were 193 Jones Act tankers, but by 2014 only 90 remained. Since there are only a limited number of Jones Act tankers and almost all are under long-term contracts, tanker capacity is stretched tight. This has led to a backup of oil in Gulf Coast ports, which is hampering oil production all along the supply chain but especially the production of unconventional tight oil in North Dakota’s Bakken region and in the Eagle Ford Shale area of South Texas.

There are currently no Jones Act tankers capable of carrying liquefied natural gas. This makes it prohibitively expensive to transport LNG to any domestic port, especially those in the noncontiguous regions of Hawaii, Alaska and Puerto Rico. Transporting LNG from a West Coast port to Hawaii would require building a much more expensive ship. The upshot is that Hawaii and Puerto Rico have been unable to benefit from abundant and cheap natural gas from the U.S.

If foreign vessels were able to ship U.S.-produced oil from Gulf Coast ports to East Coast refineries, it would save U.S. consumers $1 billion annually. There’s no longer any economic reason to keep the anti-competitive and outdated Jones Act. Congress should repeal this century-old legislative relic."