Saturday, February 28, 2015

Best Web Regulator Not Necessarily Net Neutrality

Market forces have served the Web’s development well, without the heavy hand of regulators

By Greg Ip of the WSJ. Excerpts: 
The new regulations "will enable it [the FCC] to ban a practice known as “paid prioritization”—basically, price discrimination by another name."

"the pipes that carry Internet traffic are overwhelmingly owned and operated by heavily regulated telephone and cable companies. Yet those companies are largely left alone to decide how to handle, and charge for, that traffic.

To date, however, they haven’t charged content providers fees in exchange for moving their data more rapidly over their network. Net neutrality advocates fear if paid prioritization is allowed, deep-pocketed companies like Netflix, Google and Amazon would buy access to the fast lane while cash-strapped startups and nonprofits are relegated to the Internet’s darker, congested corners."

"Nicholas Economides, an economics professor at New York University, says paid prioritization would let ISPs pick the Internet’s winners, which would naturally be the richest players."

"Yet a blanket ban on paid prioritization could also do damage. Instead of Netflix and its customers bearing the cost of the congestion they create on the Internet, all users would."

"“Somebody has to pay for the infrastructure,” said Hal Singer, a consultant and scholar at the Progressive Policy Institute. If ISPs can’t charge content providers, they’ll charge consumers, who generally are more price-sensitive, and the result will be less usage.

A ban on paid prioritization could also kill off experimentation with new business models, such as letting content providers sponsor customers’ access in return for preferential delivery of their content, as Facebook does over wireless networks in many lower-income markets."

"as capacity rises, content providers have less incentive to pay for priority.

Reclassifying broadband Internet would empower the FCC to not just impose net neutrality, but to regulate prices and access to ISPs’ networks."

"Regulating an ISP like a monopoly could undermine the incentive to invest in new capacity and different technology."



The Minimum-Wage Stealth Tax on the Poor

When a fast-food business is forced to raise pay, it also raises prices. Guess who gets hit worst by the increase.

By Thomas MaCurdy, WSJ. He is an economics professor at Stanford. Excerpts:
"One problem is that only about 5% of families have children and are supported by low-wage earnings; another is that higher minimum wages cause some workers to lose their jobs. Advocates of a higher minimum wage argue that the number of workers who gain far exceeds those who lose. Whatever the credibility of this calculus, there is yet another problem: If someone’s income is arbitrarily increased thanks to a legislatively mandated wage increase, someone else must pay for it.
Since economic evidence indicates that higher minimum wages don’t significantly affect employers’ profit rates, advocates instead say that employers will pass on these increased labor costs by raising the prices of their goods and services—and that “society,” or more affluent consumers, will pay these costs.

But will low-income families earn more from an increase in the minimum wage than they will pay as consumers of the now higher-priced goods? My research strongly suggests that they won’t.
The first step in understanding why they won’t is to recognize that minimum-wage workers are typically not in low-income families; instead they are dispersed evenly among families rich, middle-class and poor. About one in five families in the bottom fifth of the income distribution had a minimum-wage worker affected by the 1996 increase, the same share as for families in the top fifth.
Virtually as much of the additional earnings of minimum-wage workers went to the highest-income families as to the lowest. Moreover, only about $1 in $5 of the addition went to families with children supported by low-wage earnings. As many economists already have noted, raising the minimum wage is at best a scattershot approach to raising the income of poor families."

"the 1996 minimum-wage hike raised prices on a broad variety of goods and services. Food purchased outside of the home bore the largest share of the increased consumption costs, accounting for 21% with an average price increase of slightly less than of 2%;"

"Overall, the extra costs attributable to higher prices equaled 0.63% of the nondurable goods purchased by the poorest fifth of families and 0.52% of the goods purchased by the top fifth—with the percentage falling as the income level rose.

The higher prices, in other words, resembled a regressive value-added, or sales, tax,"
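
To make the regressivity concrete, here is a minimal back-of-the-envelope sketch in Python. Only the 0.63% and 0.52% cost shares come from the excerpt above; the incomes and nondurable budgets are hypothetical round numbers chosen purely to illustrate how the burden shrinks as a share of income.

```python
# Hypothetical illustration of why a broad price increase works like a regressive tax.
# Only the 0.63% / 0.52% cost shares are from MaCurdy's figures quoted above; the
# incomes and nondurable budgets below are made-up round numbers.

families = [
    # (label, annual income, nondurable spending, extra cost as share of that spending)
    ("bottom fifth", 25_000, 20_000, 0.0063),
    ("top fifth",   150_000, 60_000, 0.0052),
]

for label, income, spending, cost_share in families:
    extra_cost = cost_share * spending
    print(f"{label}: ${extra_cost:,.0f} in higher prices = {extra_cost / income:.2%} of income")

# bottom fifth: $126 in higher prices = 0.50% of income
# top fifth:    $312 in higher prices = 0.21% of income
# The affluent family pays more dollars, but the bite as a share of income is far
# larger for the poor family -- the signature of a regressive sales tax.
```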

"My analysis concludes that more poor families were losers than winners from the 1996 hike in the minimum wage."

Friday, February 27, 2015

3 Charts That Show The FCC is Full of Malarkey on Net Neutrality and Title II

From Nick Gillespie of Reason. Excerpts:
"The typical nightmare scenario that gets trotted out goes something like this: Comcast, the giant ISP that controls NBC Universal, will push its own content on users by simply blocking sites that offer competing content. Or maybe it will degrade the video streams of Netflix and Amazon so no one will want to watch them. Or perhaps Comcast will just charge Netflix a lot of money to make sure its streams flow smoothly over that "last mile" that the ISP controls. Or perhaps Comcast will implement tighter and tighter data caps on the amount of usage a given subscriber can use per month, but exempt its own content from any such limitations.

It's worth noting—indeed, it's worth stressing—that essentially none of these scenarios has come to pass over the past 20 years, despite the lack of Net Neutrality legislation. There have been occasional cases of this or that issue, but they were generally either the result of human error, technological breakdowns, or short-lived policies that customer complaints put an end to. The closest to anything like the nightmare scenarios above involved accusations by Netflix that Comcast and other ISPs were deliberately throttling its streams. Comcast said it was doing no such thing, a perspective supported by researchers at MIT and elsewhere who found that despite huge increases in demand and traffic, Netflix attempted to push its streams via congested parts of the Internet. Netflix eventually agreed to pay Comcast higher fees for what is known as a "peering" arrangement that is not technically a Net Neutrality issue. What the situation actually underscores is that for all the gee-whiz magic of the Internet, it depends ultimately on physical hardware and resources that somebody somewhere has to build, expand, and pay for. Those charges to constantly upgrade and expand capacity will ultimately be borne by content providers such as Netflix, ISPs such as Comcast, and consumers such as you and me."

" proponents typically claim that ISPs have monopolies over their local markets, that they offer shoddy and degraded connections, and that the United States is way behind other, more civilized countries whose governments more heavily regulate the Internet.
With that in mind, here are some charts about the current state of the Internet in the United States and elsewhere, some of which come from the FCC's own analysis.
[Chart: FCC]

The above comes from the FCC's summary of "Internet Access Services: Status as of December 31, 2012" (the most recent document in the series that I found online). Over the four-year period covered, the number, variety, and speed of Internet connections increased significantly. That's not something you would expect if monopoly conditions actually existed. Given the increasing centrality of the Internet, you might see more people signing up for service, but a true monopoly would have no interest in or need to improve speed or variety of service.
But it turns out, at least according to the FCC—the very agency that now says it needs to regulate the Internet like a public utility in order to ensure a free and open Internet—that the idea of monopoly ISPs is false.

[Chart: FCC]

According to this FCC chart, 80 percent of households in America have at least two fixed and/or mobile providers that offer "at least 10 Mbps downstream speeds," which until recently was far above what the agency considered high-speed broadband. In 2010, the FCC defined broadband as service that offered 4Mbps downstream and 1Mbps upstream. Just a few weeks ago, it arbitrarily upped its definition to 25Mbps downstream and 3Mbps upstream. (Net oldtimers will remember the old days of 56k modems and the like.) At the end of 2012, says the FCC, fully 96 percent of households had two or more providers offering 6Mbps downstream and 1.5Mbps upstream service."

"One of the other points that is often raised in Net Neutrality debates is that the United States lags behind foreign countries via virtually any comparison: market penetration, connection speed, cost, you name it. Last November, Bret Swanson, a researcher at The American Enterprise Institute, produced a compelling rebuttal to such arguments, which often relied on misleading data (such as advertised maximum speeds rather than actual delivered speeds) and dubious measures of network capabilities. In "Internet traffic as a basic measure of broadband health," Swanson argues that
Internet traffic volume is an important indicator of broadband health, as it encapsulates and distills the most important broadband factors, such as access, coverage, speed, price, and content availability. US Internet traffic is two to three times higher than that of most advanced nations, and the United States generates more Internet traffic per capita and per Internet user than any major nation except for South Korea.
Here's one of his figures:
[Chart: AEI]

The thrust of Swanson's basic argument is also supported by the annual "State of the Internet" reports produced by cloud-computing service Akamai, which typically show the United States doing well in most comparisons."

"To the extent that cable companies once had absolute local monopolies, it was precisely due to local governments granting them that. There are all sorts of things that local, state, and federal governments—not to mention nominally independent agencies such as the FCC—might do to reduce or remove barriers to entry for competitors. As FCC Commissioner Ajit Pai told Reason in an interview released yesterday,
There are a lot of markets where consumers want and could use more competition. That’s why since I’ve become the commissioner, I’ve focused on getting rid of some of the regulatory underbrush that stands in the way of some upstart competitors providing that alternative—streamlining local permit rules, getting more wireless infrastructure out there to give a mobile alternative, making sure we have enough spectrum in the commercial marketplace—but these kind of Title II common carrier regulations ironically will be completely counterproductive. It’s going to sweep a lot of these smaller providers away who simply don’t have the ability to comply with all these regulations, and moreover it’s going to deter investment in broadband networks, so ironically enough, this hypothetical problem that people worry about is going to become worse because of the lack of competition.
Pai calls the new rules "a solution that won't work to a problem that doesn't exist." I think he's right about that and it should give even the most uncritical supporter of the FCC action pause that the Electronic Frontier Foundation (EFF), a robust supporter of Net Neutrality, has seen fit to write a "Dear FCC" warning:
The FCC will evaluate “harm” based on consideration of seven factors: impact on competition; impact on innovation; impact on free expression; impact on broadband deployment and investments; whether the actions in question are specific to some applications and not others; whether they comply with industry best standards and practices; and whether they take place without the awareness of the end-user, the Internet subscriber.
There are several problems with this approach.  First, it suggests that the FCC believes it has broad authority to pursue any number of practices—hardly the narrow, light-touch approach we need to protect the open Internet. Second, we worry that this rule will be extremely expensive in practice, because anyone wanting to bring a complaint will be hard-pressed to predict whether they will succeed. For example, how will the Commission determine “industry best standards and practices”? As a practical matter, it is likely that only companies that can afford years of litigation to answer these questions will be able to rely on the rule at all. Third, a multi-factor test gives the FCC an awful lot of discretion, potentially giving an unfair advantage to parties with insider influence."

Erosion has been gnawing away at the Alaska coast for many, many decades

Repeating News Story: Eroding Shorelines and Imperiled Coastal Villages in Alaska by Patrick J. Michaels and Paul C. "Chip" Knappenberger of Cato. Excerpt:

"With or without human-caused climate change, bluffs and barrier islands along the coast of northwestern Alaska are inherently unstable and not particularly good places to establish permanent towns. This is probably one of the reasons the natives were largely nomadic.

“Were,” we say, because ironically, as pointed out by the Post’s Chris Mooney, research indicates that the abandonment of the nomadic ways was encouraged/hastened by the establishment of government schools!

Nor are unstable Alaskan shores anything new.

Several major environmental studies were carried out in the mid-20th century and all found extremely high rates of erosion resulting from frequent and intense storm systems. One, from nearly 50 years ago, even went as far as to suggest that a warming climate from enhanced carbon dioxide emissions would make erosion worse and gave this advice:
[C]are should be exercised in the selection of building sites and construction methods. The best sites would be at least 30 feet above sea level and either inland or along a coast which is not eroding. If a site which is low and near the ocean must be used, then a protected position leeward of a point or island would be best.  
Apparently, in places like Kivalina, this advice went unheeded.

We reviewed the situation along the Alaskan shoreline in a piece we wrote back in 2007. What we concluded then remains the case today:
Clearly, erosion has been gnawing away at the Alaska coast for many, many decades and this fact has been known for equally as long. Wind and waves acting on soil held together by ice act through a positive feedback to expose more frozen soil to the above-freezing temperatures of summer and the warm rays of sunshine, softening it for the next round of waves and wind. And so the process continues. A decline in near-shore ice cover helps to exacerbate the process. Ignoring these well-known environmental conditions has led to the unfortunate situation today where Inuit villages are facing imminent pressure to relocate. This situation has less to do with anthropogenic climate change than it does with poor planning in the light of well-established environmental threats—threats that have existed for at least the better part of the 20th century.
Despite periodically cycling into the news, nothing really is new."

Thursday, February 26, 2015

McCloskey on Piketty

Via David Henderson of EconLog.
"I'm starting to work on a paper that I'll give at the APEE (Association for Private Enterprise Education) meetings in Cancun in April. The working title for the paper, although I might change it when it comes time to submit to a journal, is "Economic Inequality: When and How Does It Matter?"
One of the first things on my agenda was to read Deirdre McCloskey's 50+ (!) page review of Piketty's Capital in the 21st Century. Many people in the blogosphere and on Facebook have talked about it and, on April 14, 2014, our own guest blogger, Alberto Mingardi, presciently wrote "my ideal candidate [for a review of Piketty] would be Deirdre McCloskey."

Although I'm a big fan of McCloskey qua economist and I'm a fan of her writing, I'm not as big a fan of her writing as many of my free-market friends--I have in mind Pete Boettke in particular--are. I sometimes find myself saying, when reading McCloskey, "Quit being so cute and get to the point." So when I saw that the review was over 50 pages long, I thought, "OMG, there she goes again."
I was wrong.

McCloskey's review is a masterpiece. She beautifully weaves together economic history, simple price theory, basic moral philosophy, and history of economic thought. Whereas I had mentally put aside an hour to read and think, it took only about 20 minutes. I highly recommend it.

Some highlights:

The Data

Yet in fact his own capta, his own things-ingeniously-seized by his research, as he candidly admits without allowing the admission to relieve his pessimism, suggest that only in Canada, the U.S., and the U.K. has the inequality of income increased much, and only recently. "In continental Europe and Japan, income inequality today remains far lower than it was at the beginning of the twentieth century and in fact has not changed much since 1945" (p. 321, and Figure 9.6). Look, for example, at page 323, Figure 9.7, the top decile's share of income, 1900-2010 for the U.S.A., the U.K., Germany, France, and Sweden. In all those countries r > g. Indeed, it has been so, with very rare exceptions, since the beginning of time. Yet after the redistributions of the welfare state were accomplished, by 1970, inequality of income did not much rise in Germany, France, and Sweden. In other words, Piketty's fears were not confirmed anywhere 1910 to 1980, nor anywhere in the long run at any time before 1800, nor anywhere in Continental Europe and Japan since World War II, and only recently, a little, in the United States, the United Kingdom, and Canada (Canada, by the way, is never brought into his tests).
Lousy Predictions
The inconsequence of Piketty's argument, in truth, is to be expected from the frailties of its declared sources. Start by adopting a theory by a great economist, Ricardo, which has failed entirely as a prediction. Landlords did not engorge the national product, contrary to what Ricardo confidently predicted. Indeed the share of land rents in national (and world) income fell heavily nearly from the moment Ricardo claimed it would steadily rise. The outcome resembles that from Malthus, whose prediction of population overwhelming the food supply was falsified nearly from the moment he claimed it would happen.
Incidentally, this same point about land rents was one I made in my criticism of Tyler Cowen's book, The Great Stagnation.
 
The Pyramid (How the Rich Innovators Get Only a Small Fraction of the Value they Create)

The economist William Nordhaus has calculated that the inventors and entrepreneurs nowadays earn in profit only 2 percent of the social value of their inventions. If you are Sam Walton the 2 percent gives you personally a great deal of money from introducing bar codes into stocking of supermarket shelves. But 98 percent at the cost of 2 percent is nonetheless a pretty good deal for the rest of us. The gain from macadamized roads or vulcanized rubber, then modern universities, structural concrete, and the airplane, has enriched even the poorest among us.
Piketty's Failure to Count Human Capital
For example--a big flaw, this one--Piketty's definition of wealth does not include human capital, owned by the workers, which has grown in rich countries to be the main source of income, when it is combined with the immense accumulation since 1800 of capital in knowledge and social habits, owned by everyone with access to them. Therefore his laboriously assembled charts of the (merely physical and private) capital/output ratio are erroneous. They have excluded one of the main forms of capital in the modern world. . . . He asserts mysteriously on page 46 that there are "many reasons for excluding human capital from our definition of capital." But he offers only one: "human capital cannot be owned by any other person." Yet human capital is owned precisely by the worker herself. Piketty does not explain why self-ownership à la Locke without permitting alienation is not ownership. If I own and operate improved land, and the law prevents its alienation (as some collectivist laws do), why is it not capital? Certainly, human capital is "capital": it accumulates through abstention from consumption, it depreciates, it earns a market-determined rate of return, it can be made obsolete by creative destruction.
Elementary Economic Error
"To be sure, there exists in principle a quite simple economic mechanism that should restore equilibrium to the process [in this case the process of rising prices of oil or urban land leading to a Ricardian Apocalypse]: the mechanism of supply and demand. If the supply of any good is insufficient, and its price is too high, then demand for that good should decrease, which would lead to a decline in its price." The (English) words I italicize clearly mix up movement along a demand curve with movement of the entire curve, a first-term error at university.
I caught this too, when I read Piketty, but I didn't mention it in my review because I failed to see the enormity of the mistake that this one elementary mistake of Piketty's led to. But McCloskey does see it. See what she does with it, on pages 91-93.
There are other things I didn't highlight: some of them because that would make this overly long post even longer and others because I noted the problem in my review."

Most federal agencies are not making meaningful distinctions in performance ratings and bonuses for senior executives

See Federal Workers: Performance, Pay, and Firing by Chris Edwards of Cato.
"Americans are concerned about the performance of the federal bureaucracy. Many people think that federal workers are overpaid and underworked. Some recent news stories provide fresh input to the debate. 
A story yesterday at GovExec.com regards pay and performance. The federal pay structure is less efficient than private pay structures because it is generally based on seniority, not job performance. But GovExec.com finds that attempts to introduce federal performance pay have not worked very well either:
Most federal agencies are not making meaningful distinctions in performance ratings and bonuses for senior executives, according to a new watchdog report. About 85 percent of career senior executives received “outstanding” or “exceeds fully successful” ratings in their performance reviews between fiscal years 2010 and 2013, at the same time that agencies have made smaller distinctions in the amount of individual bonuses, the Government Accountability Office found. This has created a system where nearly everyone is considered outstanding…
The level of federal pay is the focus of another recent story. GovExec.com reports on the large number of workers who enjoy high pay:
More than 16,900 federal employees took home in excess of $200,000 in base salary in 2014, according to a partial database of federal salary data.
The report is based on data from FedSmith.com, which is an excellent source of federal workforce information. Fedsmith’s database can list employees and their salaries by agency. For example, there are 159 people at the Small Business Administration who made more than $150,000 in wages in 2014. That’s 159 too many in my view, as the agency should be closed down.

Another recent article regards federal firing. The Federal Times confirms the extraordinarily low firing rate in the federal government compared to the private sector:
Even as lawmakers press for greater accountability within government, agencies have fired fewer employees than at any time in the last 10 years, according to data from the Office of Personnel Management.
Agencies fired 9,537 federal employees for discipline or performance issues in fiscal 2014, down from 9,634 in 2013 and down from a high of 11,770 in fiscal 2010, according to the data. The firing rate held at 0.46 percent of the workforce in both fiscal 2013 and fiscal 2014 — the lowest rate in 10 years.
The private sector fires nearly six times as many employees — about 3.2 percent — according to the Bureau of Labor Statistics, and whether the government fires too few people or just not the right people is the subject of continued debate.
For more on the federal workforce, see here."

Wednesday, February 25, 2015

Evidence shows that affluence in the US is much more fluid and widespread than the rigid class structure narrative suggests

From Mark Perry.

"
[Two charts from the Hirschl and Rank study]
Most of the discussions on income inequality, the reviled “top 1%,” and the hand-wringing about the share of income or wealth going to the “top 1%” typically assume that the top 1/5/10% and bottom 99/95/90% percentile groups by income (pick your favorite percentage) operate like private clubs that are closed to new members. That is, many people assume that various income groups are static and fixed, with very little movement or fluidity among those income groups over one’s career or lifetime. Start out life in the bottom 20% or bottom 50%? Too bad, you’re stuck there forever no matter how hard you try or work, and you can forget about ever being part of the top 1/5/10%. Born into the top 1/5/20%? Great, you’ve got a lifetime membership in that static, closed group.

That rather simplistic interpretation of a static economy is really nothing like the very fluid and dynamic world we actually live in, with significant degrees of income and wealth mobility/fluidity over one’s lifetime. That’s the main conclusion of a new study titled “The Life Course Dynamics of Affluence” by Thomas Hirschl and Mark Rank, based on an empirical investigation of individual lifetime income data in the Panel Study of Income Dynamics over a 44-year period.

For example, one of the authors’ key findings is that by age 60, nearly 70% of the US population experienced at least one year in the top 20% by income, more than half (53.1%) were in the top 10% for at least one year, more than one-third (36.4%) spent at least one year in the top 5%, and 11.1% (one out of nine) spent at least one year with income in the top 1% (see top chart above). Those findings of significant income fluidity for one-year periods are further supported when the authors look at longer time periods. For example, although 11.1% of Americans made it into the top 1% for at least one year, only 1.1% (1 in 91) of Americans stayed in the top 1% for ten years or more during their lifetimes, and only about half that amount (0.60%, or 1 in 167) were able to stay in the top 1% for ten consecutive years (see bottom chart above). That should shatter the myth that the top 1% is a fixed club closed to new members! Likewise, more than 1 out of 3 Americans (36.4%) spent at least a year in the top 5%, but only about 1 in 15 (6.6%) remained there for ten years or more, and only about 1 in 27 (3.7%) spent 10 consecutive years in the top 5%. Lots of movement and fluidity.
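
As a quick arithmetic check, the "1 in N" translations in the paragraph above follow directly from the shares reported in the study; a few lines of Python reproduce them:

```python
# Converting the study's reported shares into the "1 in N" odds cited above.
shares = {
    "top 1% for at least one year": 0.111,
    "top 1% for ten or more years": 0.011,
    "top 1% for ten consecutive years": 0.0060,
    "top 5% for at least one year": 0.364,
    "top 5% for ten or more years": 0.066,
    "top 5% for ten consecutive years": 0.037,
}
for label, share in shares.items():
    print(f"{label}: {share:.1%} -> roughly 1 in {1 / share:.0f}")
# Output: 1 in 9, 1 in 91, 1 in 167, 1 in 3, 1 in 15, 1 in 27 -- matching the text.
```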

Here is a summary of the main findings of the study (emphasis added):

1. There is substantial fluidity in top-level income over ages 25 to 60. Thus a static image of top-level income tenure is at odds with the empirics of how people live out their life course.
2. The study findings indicate that top-level income categories are heterogeneous with respect to time, comprised of a relatively small set of persistent members, and a larger set of short-term members. For example, although over half of the U.S. population experienced one or more years of top 10th percentile income, only about half of this set attained top 10th percentile income for three consecutive years, and fewer than 7 percent persisted at this level for 10 consecutive years. Thus the lifetime top 10th percentile is mostly transitory, moving in and out of this percentile over the life course.
3. There are two contentious social implications related to the finding that top-level income is fluid across time. One is that there is widespread opportunity for top-level income. The opportunity to attain top-level income is widely accessed, and many reap the benefits of opportunity. It is also the case that attaining top-level income in one year does not necessarily predict it for the following year. Indeed, most who attain top-level income do so for a limited number of years, and to the extent that they have expectations of persistence, have some probability of experiencing insecurity relative to their expectations. Income fluidity is a double-edged sword, creating opportunity for many, along with insecurity that this opportunity may end sooner than hoped for.
4. We interpret the widespread attainment of top-level income as materially consistent with the way the majority of Americans tend to characterize their society. In a recently published study, we report evidence that most Americans hold fast to the belief that hard work will be rewarded economically, and the present study finds evidence that many Americans do, in fact, attain top-level income. This evidence is counter-intuitive vis-à-vis popular interpretations regarding the 1 percent versus the 99 percent, and we believe that our findings serve to qualify these interpretations. When interpreting social and economic relationships and trends, it is important to consider not simply one, or even many, cross-sections in time, but also the extent of social and economic mobility across the life course. Individuals experience their lives not as a disconnected set of years, but rather as a continuous lifetime of experience.

MP: Thanks to Thomas Hirschl and Mark Rank for bringing some much-needed attention to the significant income mobility and fluidity in the American economy, which directly contradicts the narrative we hear all the time of a rigid class structure based on static income groups like the top 1/5/10%, a static bottom 20/50/99%, etc.

As one of the authors (Mark Rank) pointed out last year in the New York Times:
It is clear that the image of a static 1 and 99 percent is largely incorrect. The majority of Americans will experience at least one year of affluence at some point during their working careers. (This is just as true at the bottom of the income distribution scale, where 54 percent of Americans will experience poverty or near poverty at least once between the ages of 25 and 60).
Ultimately, this information casts serious doubt on the notion of a rigid class structure in the United States based upon income. It suggests that the United States is indeed a land of opportunity, that the American dream is still possible — but that it is also a land of widespread poverty. And rather than being a place of static, income-based social tiers, America is a place where a large majority of people will experience either wealth or poverty — or both — during their lifetimes.
Rather than talking about the 1 percent and the 99 percent as if they were forever fixed, it would make much more sense to talk about the fact that Americans are likely to be exposed to both prosperity and poverty during their lives, and to shape our policies accordingly. As such, we have much more in common with one another than we dare to realize."

Corporations, like all human institutions, are great engines for making mistakes. The only reason they seem so competent is that companies who make too many mistakes go out of business, and we don't have them around for comparison.

See The Church of Wal-Mart by Megan McArdle. 
"I got a lot of responses to my post last week on Wal-Mart's decision to raise the minimum wage many of its employees earn to $10 an hour next year. One variety of response stood out: the folks who said "Wal-Mart is doing this because it's good for its business."

It stood out because it is almost right, but not quite. The correct statement is that "Wal-Mart is doing this because it thinks it's good for its business." Never ignore the possibility that Wal-Mart could be completely wrong.

I remark on this because some of the arguments I saw verged upon what I've come to think of as "corporation theology": the belief that if a corporation is doing something, that thing must be incredibly profitable. This is no less of a faith-based statement than the Immaculate Conception of Mary. Yet it is surprisingly popular among commentators, not just on the right, but also on the left.
For example, way back in the early 2000s, I wrote an article about potential class-action lawsuits against fast-food companies (the lawsuits actually materialized just a few months later, were dismissed by an exasperated judge and faded into obscurity). I interviewed many, many folks in the anti-obesity movement, including some fairly famous authors. And I asked one of the authors what I thought was an easy question: "I loved your book, but I'm struggling with something. You have a lengthy section about fast-food advertising, and I haven't been able to find the studies that show fast-food advertising causes kids to increase their consumption of fast food. Could you point me to them?"

The author seemed taken aback, almost confused by the question. It emerged that they hadn't looked at any actual studies, but corporations spend huge amounts of money advertising to children, so obviously, it must do something.


This left-wing writer was evincing considerably more faith than I have in the American corporation. Corporations do dumb stuff all the time -- for decades, even. Moreover, advertising has multiple purposes. It can of course induce you to consume more of a product, but frankly, no matter how much Pepperidge Farm advertises, it's probably not going to dramatically increase America's consumption of prepackaged cookies. So why does it advertise? Because it wants you to choose a Milano instead of an Oreo or one of them newfangled biscotti.

This is corporation theology. Of course, you also see it on the right -- arguments that if some product were good or desirable, a corporation would already have provided it. The entire history of human progress argues against this theory.

As these two examples suggest, corporation theology gets trimmed to personal and ideological convenience, as all theologies often are: A liberal is capable of simultaneously believing that market failures abound in industries he or she would like to regulate, and also that Costco knows how to run Wal-Mart's labor policy better than Wal-Mart does; a conservative, the inverse. Both are wrong. Corporations, like all human institutions, are great engines for making mistakes. The only reason they seem so competent is that companies who make too many mistakes go out of business, and we don't have them around for comparison.

Wal-Mart's decision to raise its minimum wage is undoubtedly what it believes to be the soundest business strategy. Many of its investors disagreed. But we won't know whether this was good for business until we see what happens to its profits a few years down the line. And frankly, we may not know even then, what with all the other changes going on in its market. Business, unlike theology, rarely offers final answers.
  1. No, no, don't e-mail me to tell me about your favorite study of this subject. I've seen some studies, all of which seemed to me to be absurdly weak (badly designed; plus, if fast-food advertising works so well on kids, how come public health advertising doesn't?). But this is beside the point. The point is that this author believed that advertising must work, because after all, companies do so much of it."

Tuesday, February 24, 2015

The Minimum Wage and Monopsony

By David Henderson of EconLog.
"I promised a few weeks ago to "write a further note explaining a more-sophisticated way of understanding the harmful effects of the minimum wage." This isn't it.

The reason is that three issues came up in the comments and e-mails and I want to handle them first. The promised follow-on post will come in another few days.

The three issues were:

1. Arguments for the minimum wage as a way of helping some low-wage workers even if the employment effects are negative.
2. Questions about why imposing a minimum wage in a monopsony situation could increase employment and efficiency.
3. A question about why unskilled workers are actually the least likely to face monopsony.
Here are my responses in order.

1. I should have specified in my original post, but didn't, that I was giving the student who asked the original question the best (and really only) argument one could give for a minimum wage increasing efficiency. It's easy to come up with arguments for a minimum wage when you equally weight the dollar losses to those who lose their jobs and the dollar gains to those who get higher wages. But then you're leaving out the main losers: employers and, ultimately, consumers who pay somewhat higher prices for the goods and services produced by the minimum-wage workers. I was excluding these kinds of arguments because, to repeat, I was looking for only pro-efficiency (in the economist's sense of efficiency--gains from the change in $ terms exceed losses in $ terms) arguments.

2. The student at Susquehanna University told me by e-mail that he didn't understand the monopsony argument. I still have not got around to learning how to put a supply demand graph on Econlog. I will learn, but not today. Fortunately, I don't have to reinvent the on-line monopsony graph. Here's the best one I have found.

In the figure, Figure 14.9, if the employer is a monopsonist unconstrained by a minimum wage, he will hire Lm of labor because at that point, his marginal revenue product (MRP) equals marginal factor cost (MFC). Reading down from the intersection of the two lines to the supply curve (S), we get that the wage is $4 an hour.

But the efficient amount of labor is the amount where the supply curve intersects MRP. That point, unfortunately, is not labeled in the graph. You can tell, though, just by eye-balling, that that point is at a wage of approximately $4.70 an hour. So if the government imposes a minimum wage of $5.00 an hour, it will cause the MFC curve to be at $5 up to the point where $5 and the supply curve intersect, and then will jump to the old MFC. With this new MFC curve, the monopsonist will employ L2. Notice that this is more than the number of people employed when there was no minimum wage. QED.

The government could make the situation even more efficient by setting the minimum wage at $4.70 an hour rather than $5.00 an hour. I leave that as an exercise for the reader.
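
For readers who would rather see numbers than read a graph, here is a minimal numerical sketch of the monopsony argument in point 2. The linear supply and MRP curves are hypothetical (they are not taken from Figure 14.9), but they are calibrated so the monopsony wage comes out at $4.00 and the efficient wage at roughly $4.70, matching the story above.

```python
# A hypothetical linear-curve sketch of the monopsony argument.
# Inverse labor supply: w = 2 + 0.03*L; marginal revenue product: MRP = 10 - 0.06*L.
# Parameters are made up, chosen so the monopsony wage is $4.00 and the
# efficient wage is about $4.67.

def supply_wage(L):
    """Wage needed to attract L workers (inverse labor supply)."""
    return 2.0 + 0.03 * L

def mrp(L):
    """Marginal revenue product of the L-th worker."""
    return 10.0 - 0.06 * L

# Unconstrained monopsonist: hire where MRP = MFC and pay only the supply wage.
# For this linear supply curve, MFC = d/dL [supply_wage(L) * L] = 2 + 0.06*L.
L_monopsony = (10.0 - 2.0) / (0.06 + 0.06)   # about 66.7 workers
w_monopsony = supply_wage(L_monopsony)        # $4.00 an hour

# Efficient outcome: hire where MRP equals the supply wage.
L_efficient = (10.0 - 2.0) / (0.06 + 0.03)    # about 88.9 workers
w_efficient = supply_wage(L_efficient)         # about $4.67 an hour
assert abs(mrp(L_efficient) - w_efficient) < 1e-9

def employment_with_minimum(w_min):
    """Employment when a minimum wage flattens the monopsonist's MFC at w_min."""
    workers_willing = (w_min - 2.0) / 0.03     # labor supplied at the minimum wage
    workers_wanted = (10.0 - w_min) / 0.06     # hiring where MRP = w_min
    return min(workers_willing, workers_wanted)

print(round(L_monopsony, 1), round(w_monopsony, 2))    # 66.7 at $4.0
print(round(L_efficient, 1), round(w_efficient, 2))    # 88.9 at $4.67
print(round(employment_with_minimum(4.67), 1))         # ~88.8: roughly the efficient level
print(round(employment_with_minimum(5.00), 1))         # 83.3: above monopsony, below efficient
print(round(employment_with_minimum(6.00), 1))         # 66.7: too high, and the gain vanishes
```

Note the last line: push the minimum well above the efficient wage and employment starts falling again, which is why the $4.70 level, and not just any higher number, maximizes efficiency in this setup.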

3. In my previous post, I wrote:

I don't find the monopsony claim persuasive. Monopsony requires that the employer have no or few competitors trying to hire the same kind of labor. It is precisely the fact that the workers are unskilled that gives them many potential employers.
In response to this, Sam Raptis wrote:

Could anyone explain the reasoning behind Henderson's claim toward the end of the article that "It is precisely the fact that the workers are unskilled that gives them many potential employers." is true? I'm not doubting it, but I found it confusing since I would tend to think the opposite.

Yes. Many, many potential employers want relatively unskilled labor for unskilled tasks. So there are many potential employers for the unskilled. Go to Home Depot sometime and watch how many people drive up with cars and pickup trucks and hire (in my area, anyway) Hispanic workers for a day. Now imagine that the worker is an astronaut. This is highly skilled. How many potential employers are there? Or, imagine that the worker is a professional basketball player and is highly skilled. How many potential employers are there?"

Five Myths about Net Neutrality

By Brent Skorup of Mercatus.
"In view of the impending Federal Communications Commission (FCC) vote to regulate the Internet under Title II of the New Deal–era Communications Act, it is critical to understand what these “net neutrality” rules will and will not do.

Columbia Business School professor Eli Noam says net neutrality has “at least seven different related but distinctive meanings….” The consensus is, however, that net neutrality is a principle for how an Internet Service Provider (ISP) or wireless carrier treats Internet traffic on “last mile” access — the connection between an ISP and its customer. Purists believe net neutrality requires ISPs to treat all last-mile Internet traffic the same. The FCC will not enforce that radical notion because networks are becoming more “intelligent” every year and, as a Cisco network engineer recently put it, equal treatment for all data packets “would be setting the industry back 20 years.”

Nevertheless, because similar rules were twice struck down in federal court, the FCC is crafting new net neutrality rules for ISPs and technology companies. Many of these Title II provisions reined in the old Bell telephone monopoly and are the most intrusive rules available to the FCC. The net neutrality rules are garnering increased public scrutiny because they will apply to one of the few bright spots in the US economy — the technology and communications sector.

As with many complex concepts, there are many myths about net neutrality. Five of the most widespread ones are dispelled below.


Reality: Prioritization has been built into Internet protocols for years. MIT computer scientist and early Internet developer David Clark colorfully dismissed this first myth as “happy little bunny rabbit dreams,” and pointed out that “[t]he network is not neutral and never has been.” Experts such as tech entrepreneur and investor Mark Cuban and President Obama’s former chief technology officer Aneesh Chopra have observed that the need for prioritization of some traffic increases as Internet services grow more diverse. People speaking face-to-face online with doctors through new telemedicine video applications, for instance, should not be disrupted by once-a-day data backups. ISPs and tech companies should be free to experiment with new broadband services without time-consuming regulatory approval from the FCC. John Oliver, The Oatmeal, and net neutrality activists, therefore, are simply wrong about the nature of the Internet.


Reality: Even while lightly regulated, the Internet will remain open because consumers demand an open Internet. Recent Rasmussen polling indicates the vast majority of Americans enjoy the open Internet they currently receive and rate their Internet service as good or excellent. (Only a small fraction, 5 percent, says their Internet quality is “poor.”) It is in ISPs’ interest to provide high-quality Internet just as it is in smartphone companies’ interest to provide great phones and automakers’ interest to build reliable cars. Additionally, it is false when high-profile scholars and activists say there is no “cop on the beat” overseeing Internet companies. As Federal Trade Commissioner Joshua Wright testified to Congress, existing federal competition laws and consumer protection laws — and strict penalties — protect Americans from harmful ISP behavior.


Reality: The FCC’s net neutrality rules are not an effective way to improve broadband competition. Net neutrality is a principle for ISP treatment of Internet traffic on the “last mile” — the connection between an ISP and a consumer. The principle says nothing about broadband competition and will not increase the number of broadband choices for consumers. On the contrary, net neutrality as a policy goal was created because many scholars did not believe more broadband choices could ensure a “neutral” Internet. Further, Supreme Court decisions lead scholars to conclude that “as prescriptive regulation of a field waxes, antitrust enforcement must wane.” Therefore, the FCC’s net neutrality rules would actually impede antitrust agencies from protecting consumers.


Reality: Intelligent management of Internet traffic and prioritization provide useful services to consumers. Net neutrality proponents call zero-rating — which is when carriers allow Internet services that don’t subtract from a monthly data allotment — and similar practices “dangerous,” “malignant,” and rights violations. This hyperbole arises from dogma, not facts. The real-world use of prioritization and zero-rating is encouraging and pro-consumer. Studies show that zero-rated applications are used by millions of people around the globe, including in the United States, and they are popular. In one instance, poor South African high school students petitioned their carriers for free — zero-rated — Wikipedia access because accessing Wikipedia frequently for homework was expensive. Upon hearing the students’ plight, Wikipedia and South African carriers happily obliged. Net neutrality rules like Title II would prohibit popular services like zero-rating and intelligent network management that makes more services available.


Reality: First, the FCC’s rules will make broadband more expensive, not cheaper. The rules regulate Internet companies much like telephone companies and therefore federal and state telephone fees will apply to Internet bills. According to preliminary estimates, millions of Americans will drop or never subscribe to an Internet connection because of these price hikes. Second, the FCC’s rules will not make Netflix and webpages faster. The FCC rules do not require ISPs to increase the capacity or speed of customers’ connections. Capacity upgrades require competition and ISP investment, which may be harmed by the FCC’s onerous new rules.

To see more from Mercatus scholars on net neutrality, visit mercatus.org/netneutrality."

Monday, February 23, 2015

More Money Does Not Equal Better Public School

Education policy meets Economics 101.

By A. Barton Hinkle of Reason.
"The Virginia Education Association and others have pointed out that state spending per student has fallen in recent years. Adjusted for inflation, it’s now at least 16 percent lower than it was in 2008-09. This is presented as grim news—a dark sign that the state’s public schools are falling behind, perhaps coming to a breaking point.

If so, then word has yet to reach the Virginia Department of Education. In October, the department boasted that students in the commonwealth are doing better on the SATs: “Virginia 2014 public school graduates achieved significant gains and outperformed their peers nationwide on the SAT, according to results released today by the College Board.”

Virginians are ahead of the pack by 23 points in reading, 11 points in math, and 15 points in writing. That announcement came shortly after the VDOE announced that the statewide on-time graduation rate was approaching 90 percent, and that seven public schools had received National Blue Ribbon awards from U.S. Education Secretary Arne Duncan.

Last August, the department sent out a press release celebrating the fact that “Student achievement improved during 2013-14 on challenging mathematics Standards of Learning” — along with another cheering the news that “Virginia students outperformed their peers nationwide by significant margins on the ACT this year as the number of the commonwealth’s high school seniors taking the college-admissions examination continued to grow.”

Meanwhile, the VEA insists “our public schools are in serious need” because of the “dangerously downward trend in state spending on public schools.”

The group isn’t quite so quick to point out a big offsetting factor: In Virginia, localities pay 51 cents of every dollar spent on the schools, and the federal government pays another 8 cents. So a 16 percent cut in state funding per pupil does not mean a 16 percent cut in total funding per pupil.

As PolitiFact Virginia noted when it verified the VEA’s claim, “Data taking into account all three money sources shows an average total of $11,316 was spent per Virginia student in the 2008-09 school year and that fell to $11,257 in 2012-13, the latest year available. When adjusted for inflation, that’s an 8.6 percent drop.”

That might be one reason Gov. Terry McAuliffe could announce last year that “Virginia again boasts the nation’s third-highest percentage of public high school seniors qualifying for college credit on Advanced Placement examinations.” And why, a month before that, the state Board of Education honored “57 schools and two school divisions for raising the academic achievement of economically disadvantaged students.” And so on.

None of this should come as a big shock. The correlation between school spending and student achievement is far weaker than commonly thought. A couple of years ago, for instance, researchers studying Philadelphia reported that the district “spent approximately $2,000 less per student than its peer districts and yet generated slightly better results on state tests.” The Washington Post ran the story under the headline, “Surprising New Research on School Funding.”

Why surprising? Since 1970, inflation-adjusted spending per pupil has doubled. Class sizes have shrunk. Yet academic gains haven’t come close to keeping pace. If test scores don’t soar when spending does, then why should they plunge when spending plunges?

It is not mere coincidence that similar patterns show up in health care, which—like schooling—is heavily dominated by government involvement. And as the push for health care reform gained steam in 2009, supporters took to pointing out seemingly curious data from the Dartmouth Atlas of Health Care. They showed huge variations in Medicare outlays without any corresponding variations in health outcomes. As Atul Gawande wrote in The New Yorker, “The more money Medicare spent per person in a given state, the lower that state’s quality ranking tended to be. In fact, the four states with the highest levels of spending — Louisiana, Texas, California, and Florida — were near the bottom of the national rankings on the quality of patient care.”

It’s much the same with education. New York and New Jersey spend a similarly high sum per pupil: more than $19,000 a year. Forty-three percent of New Jersey eighth-graders are proficient or better in reading; 33 percent of New York’s are. Thirty-three is also the same percentage of reading-proficient eighth-graders in Utah, which spends less than $7,300 per pupil. Indiana spends about $8,000 and comes in just behind Utah. Rhode Island, which comes in several points behind both of them, spends more than twice as much as they do: nearly $18,000 per pupil.

One possible rebuttal to all of this might be that Virginia’s overall 8 percent cut in per-pupil funding will show up later. Perhaps kids who are in third grade now will perform worse in eighth grade than they otherwise would. Perhaps today’s eighth-graders will do worse in 12th grade, and so on. But that’s just speculation. So far, performance hasn’t suffered from cuts.

Another response would tender the observation that teacher pay is often wretched. Some educators qualify for food stamps. The other day a Chesterfield teacher told county leaders his salary makes his children eligible for free school lunches. Teachers should indeed be paid better—and it’s worth asking why they haven’t gained more from the big increases in education spending over the past few decades. Surely some of the funds that go to expanding central office bureaucracies should go to the classroom instead.

That’s an argument about equity, though—not effectiveness. If you’re a teacher struggling to pay the bills every month, the 16 percent drop in state spending on schools is an outrage. But if you’re a taxpayer struggling to pay the bills every month, getting the same benefit at a lower cost looks like a pretty good deal."

Chinese scientists find that increases in sea ice around Antarctica were not consistent with human-caused climate change

By Paul C. "Chip" Knappenberger and Patrick J. Michaels of Cato.
"First up is a new study comparing climate model projections with observed changes in the sea ice extent around Antarctica.

While everyone seems to talk about the decline in the sea ice in the Northern Hemisphere, considerably less discussion focuses on the increase in sea ice in the Southern Hemisphere. If it is mentioned at all, it is usually quickly followed by something like “but this doesn’t disprove global warming, it is consistent with it.”

But, even the folks delivering these lines probably realize that the latter bit is a stretch.
In fact, the IPCC and others have been trying to downplay this inconvenient truth ever since folks first started to note the increase. And the excuses are getting more involved.

A new study pretty much exposes the emperor.

A team of three Chinese scientists led by Qi Shu compared the observed trends in Southern Hemisphere sea ice extent (SIE) with those projected by the collection of climate models used to forecast future climate changes by the U.N.’s Intergovernmental Panel on Climate Change (IPCC). In a nutshell, they found that increases in sea ice around Antarctica were not consistent with human-caused climate change at all—or at least not by how climate models foresee it taking place. Figure 1 shows the extent of the mismatch—rather shocking, really.
Figure 1. Comparison of observed (blue) and mean climate model projected (red) changes in Antarctic sea ice extent (from Shu et al., 2015).
Shu et al. write:
The linear trend of satellite-observed Antarctic SIE is 1.29 (±0.57) × 10⁵ km² decade⁻¹; only about 1/7 of [climate] models show increasing trends, and the linear trend of the [multi-model mean] is negative with the value of −3.36 (±0.15) × 10⁵ km² decade⁻¹.
This should pretty much quell talk that everything climate is proceeding according to plan.
For all the details, be sure to check out the full paper (which is open access).

The next paper worth having a look at is one that examines the impact of urbanization on thunderstorm development in the southeastern U.S.

Recall that a global warming talking point is that greenhouse gas-induced climate change will result in more episodes of intense precipitation.

As with all manner of extreme weather events, the association is far from being so simple. All sorts of confounding factors impact the observed changes in precipitation and make disentangling and identifying any impact from anthropogenic global warming nearly impossible. We have discussed this previously, and this new research provides more such evidence.

A team of researchers led by Alex Haberlie developed a method of locating “isolated convection initiation” (ICI) events from historic radar data. ICI events are basically thunderstorm kickstarters. Examining 17 years of data for the region around Atlanta, the team found:
Results reveal that ICI events occur more often over the urban area compared to its surrounding rural counterparts, confirming that anthropogenic-induced changes in land cover in moist tropical environments lead to more initiation events, resulting thunderstorms and affiliated hazards over the developed area.
In other words, pointing to increases in thunderstorms and declaring greenhouse gas emissions the culprit is overly simplistic and potentially misleading.

The full details are available here, although they are behind a paywall. But even a read of the abstract will prove enlightening. Turns out climate change is not so simple.

And finally, as the number of people shivering from cold in the Eastern U.S. increases, so, too, does the effort to link the cold to global warming—mostly through feedback from declines in Arctic sea ice.
While we have been over this before—the linkages are less than robust—we’re always happy to see new research on the topic.

Just published is a paper by University of Washington’s Dennis Hartmann that examines the causes behind last winter’s brutal cold in the eastern U.S. Instead of a link to sea ice and high latitude conditions, he found tropical sea surface temperature (SST) anomalies in the Pacific Ocean were a driving force behind the cold air outbreaks last winter.
Hartmann further notes that, as of his writing of the paper (in January 2015), the same conditions present last winter had persisted into this one. The current situation bears this out.

This passage from Hartmann’s paper bears repeating and is worth keeping in mind:
This result is consistent with a long history of observational and modeling work indicating that SST anomalies in the extratropics are strongly driven by atmospheric circulation anomalies, while SST anomalies in the tropics can strongly force the atmospheric circulation.
In other words, while extratropical circulation drives our daily weather, tropical sea surface temperature patterns drive the circulation. Thus, don’t look to the Arctic to explain winter’s weather, but rather the Tropics. Hopefully, John Holdren etc. will take this to heart."

Sunday, February 22, 2015

Minimum wages do in fact over several years slow job growth for low-skilled workers

See Minimum Accuracy, Maximum Illogic by Don Boudreaux of Cafe Hayek.
"Here’s a letter to The Guardian; (I thank Rush Olson for the pointer to the Guardian report):
Aghast that many businesses have the gall to lobby against legislation that arbitrarily raises their costs, you assert that “a large body of economic research has discredited” the claim that raising the minimum wage destroys jobs for some low-skilled workers (“How a powerful rightwing lobby is plotting to stop minimum wage hikes,” Feb. 20).

First, your report presents a wholly misleading account of the current state of research.  As economists Jonathan Meer (of Texas A&M) and Jeremy West (of M.I.T.) wrote just last month in a revised version of a well-respected paper, “[t]he voluminous literature on minimum wages offers little consensus on the extent to which a wage floor impacts employment.”*  Profs. Meer and West, justly critical of the shortness of the time spans examined by ‘pro’-minimum-wage studies, then present powerful evidence that minimum wages do in fact over several years slow job growth for low-skilled workers. 
Second, your claims on behalf of the minimum wage are specious on their face.  If you really believe that “employment expands with wages,” you should also believe, say, that newspaper advertising expands with rates.  The fact that you likely understand that newspaper advertising would fall if government were to force all newspapers to arbitrarily hike the advertising rates they charge makes mysterious your failure to understand that employment falls when government forces workers to arbitrarily hike the wage rates they charge."

Punishing the rich is not the answer to inequality, Nobel laureate Christopher Pissarides says

From the Guardian. Excerpts:
"Governments should combat inequality by using their tax revenues to create jobs, rather than simply redistribute money from the rich to the poor, a leading economist said this week in Davos.

Christopher Pissarides, professor of economics at the London School of Economics, told the World Economic Forum annual meeting that citizens around the world suffer extreme inequality, but punishing people on high incomes is not the answer.

“I don’t think taxing high incomes and simply taking the money and passing it on as transfers to lower incomes can work in today’s open globalised world,” Pissarides, who won the Nobel prize for economics in 2010, said in a briefing on income inequality.

Redistribution takes away the incentive for lower-skilled people to acquire skills and go into the labour market, he argued, and creates disincentives for higher earners to stay in the country, work hard and look for new ventures to make money.

Instead, he called for governments to use more imaginative ways of rebalancing incomes by creating more and better jobs at the lower end and investing in better education."


“I think we’d be doing better by emphasising ways of reducing poverty rather than by sensationalising the issue by saying how much the very rich people are worth.”"

Saturday, February 21, 2015

8 Goofs in Jonathan Gruber’s Health Care Reform Book

This Obamacare architect’s propaganda piece is a comic of errors

By MATT PALUMBO of FEE.

"In one of life’s bitter ironies, I recently found a book by Jonathan Gruber in the bin of a bookstore’s going-out-of-business sale. It’s called Health Care Reform: What It Is, Why It’s Necessary, How It Works. Interestingly, the book is a comic, which made it a quick read. It’s just the sort of thing that omniscient academics write to persuade ordinary people that their big plans are worth pursuing.
 
In case you’ve forgotten — and to compound the irony — Gruber is the Obamacare architect who received negative media attention recently for some controversial comments about the stupidity of the average American voter. In Health Care Reform, Gruber focuses mainly on two topics: an attempted diagnosis of the problems with the American health care system, and how the Affordable Care Act (the ACA, or Obamacare) will solve them. I could write a PhD thesis on the myriad fallacies, half-truths, and myths propounded throughout the book. But instead, let’s explore eight of Gruber’s major errors.



Error 1: The mandate forcing individuals to buy health insurance is just like forcing people to buy car insurance, which nobody questions.

This is a disanalogy — and an important one. A person has to purchase car insurance only if he or she gets a car. The individual health insurance mandate forces one to purchase health insurance no matter what. Moreover, what all states but three require for cars is liability insurance, which covers accidents that cause property damage and/or bodily injury. Technically speaking, you’re only required to have insurance to cover damages you might impose on others. If an accident is my fault, liability insurance covers the other individual’s expenses, not my own, and vice versa.

By contrast, if the other driver and I each had collision insurance, we would both be covered for vehicle damage regardless of who was at fault. If collision insurance were mandated, the comparison to health insurance might be apt, because, as with health insurance, collision covers damage to oneself. But no states require collision insurance.

Gruber wants to compare health insurance to car insurance primarily because (1) he wants you to find the mandate unobjectionable, and (2) he wants you to think of the young uninsured (those out of the risk pool) as being sort of like uninsured drivers — people who impose costs on others due to accidents.

But not only is the comparison inapt, Gruber’s real goal is to transfer resources from those least likely to need care (younger, poorer people) to those most likely to need care (older, richer people). The only way mandating health insurance could be like mandating liability car insurance is in preventing the uninsured from shifting the costs of emergent care thanks to federal law. We’ll discuss that as a separate error, next.

Error 2: The emergency room loophole is responsible for increases in health insurance premiums.

In 1986, President Reagan signed the Emergency Medical Treatment and Active Labor Act, one provision of which was that hospitals couldn’t deny emergency care to anyone, regardless of their ability to pay. This act created the “emergency room loophole,” which allows many uninsured individuals to receive care without paying.

The emergency room loophole does, indeed, increase premiums. There is no free lunch. The uninsured who use emergency rooms can’t pay the bills, and the costs are thus passed on to the insured. So why do I consider this point an error? Because Gruber overstates its role in increasing premiums. “Ever wonder why your insurance premiums keep going up?” he asks rhetorically, as if this loophole is among the primary reasons for premium inflation.

The reality is, spending on emergency rooms (for both the uninsured and the insured) only accounts for roughly 2 percent of all health care spending. Claiming that health insurance premiums keep rising due to something that accounts for 2 percent of health care expenses is like attributing the high price of Starbucks drinks to the cost of their paper cups.
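A quick back-of-envelope calculation shows why a 2 percent slice can’t drive premium growth. The dollar figures below are rough illustrative assumptions of mine, not numbers from Gruber or Palumbo:

```python
# Back-of-envelope: how much could uncompensated ER care move total spending?
# Figures are round illustrative assumptions, not official statistics.
total_health_spending = 3.0e12      # roughly $3 trillion per year (rough US figure)
er_share = 0.02                     # ER care ~2% of all health spending (per the article)
uncompensated_fraction = 0.5        # assume, generously, half of ER care goes unpaid

cost_shifted_to_insured = total_health_spending * er_share * uncompensated_fraction
print(f"Shifted cost: ${cost_shifted_to_insured / 1e9:.0f} billion "
      f"({cost_shifted_to_insured / total_health_spending:.1%} of total spending)")
# Even with a generous unpaid share, the pass-through is about 1% of spending --
# far too small to explain years of premium growth on its own.
```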

Error 3: Medical bills are the No. 1 cause of individual bankruptcies.

Gruber doesn’t include a single reference in the book, so it’s hard to know where he’s getting his information. Those lamenting the problem of medical bankruptcy almost always rely on a 2007 study conducted by David Himmelstein, Elizabeth Warren, and two other researchers. The authors offered the shocking conclusion that 62 percent of all bankruptcies are due to medical costs.

But in the same study, the authors also claimed that 78 percent of those who went bankrupt actually had insurance, so it would be strange for Gruber to claim the ACA would solve this problem. While it would be unfair to conclude definitively that Gruber relied on this study for his uncited claims, it is one of the only studies I am aware of that could support his claim.

More troublingly, perhaps, a bankruptcy study by the Department of Justice — which had a sample size five times larger than Himmelstein and Warren’s study — found that 54 percent of bankruptcies have no medical debt, and 90 percent have debt under $5,000. A handful of studies that contradict Himmelstein and Warren’s findings include studies by Aparna Mathur at the American Enterprise Institute; David Dranove and Michael Millenson of Northwestern University; Scott Fay, Erik Hurst, and Michelle White (at the universities of Florida, Chicago, and San Diego, respectively); and David Gross of Compass Lexecon and Nicholas Souleles of the University of Pennsylvania.

Why are Himmelstein and Warren’s findings so radically different? Aside from the fact that their study was funded by an organization called Physicians for a National Health Program, the study was incredibly liberal about what it defined as a medical bankruptcy. The study considered any bankruptcy with any amount of medical debt as a medical bankruptcy. Declare bankruptcy with $100,000 in credit card debt and $5 in medical debt? That’s a medical bankruptcy, of course. In fact, only 27 percent of those surveyed in the study had unreimbursed medical debt exceeding $1,000 in the two years prior to declaring bankruptcy.
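To see how much the definition matters, here is a small sketch contrasting the study’s “any medical debt counts” rule with a stricter threshold. The filings are hypothetical examples of my own, chosen only to illustrate the classification rule described above:

```python
# Hypothetical bankruptcy filings: (credit_card_debt, medical_debt) in dollars.
filings = [
    (100_000, 5),      # almost entirely credit-card debt
    (40_000, 800),
    (20_000, 12_000),  # genuinely medical-driven
    (60_000, 0),
]

# Himmelstein/Warren-style rule, as described above: any medical debt at all.
loose = sum(1 for _, med in filings if med > 0)

# A stricter rule: medical debt must exceed $1,000.
strict = sum(1 for _, med in filings if med > 1_000)

print(f"'Medical bankruptcies' under the loose rule: {loose} of {len(filings)}")
print(f"'Medical bankruptcies' with a $1,000 floor:  {strict} of {len(filings)}")
# The loose rule labels 3 of 4 filings 'medical'; the stricter rule only 1.
```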

David Dranove and Michael L. Millenson at the Kellogg School of Management reexamined the Himmelstein and Warren study and could only find a causal relationship between medical bills and bankruptcy in 17 percent of the cases surveyed. By contrast, in Canada’s socialized medical system, the percentage of bankruptcies due to medical expenses is estimated at between 7.1 percent and 14.3 percent. One wonders if the Himmelstein and Warren study was designed to generate a narrative that self-insurance (going uninsured) causes widespread bankruptcy.

Error 4: 20,000 people die each year because they don’t have the insurance to pay for treatment.

If the study this estimate was based on were a person, it could legally buy a beer at a bar. Twenty-one years ago, the American Medical Association released a study estimating the mortality rate of the uninsured to be 25 percent higher than that of the insured. The number who die each year due to a lack of insurance is thus calculated by taking the number of uninsured and extrapolating how many of them would die in a given year, given that they’re 25 percent more likely to die than an insured person.
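The extrapolation itself is simple arithmetic; a minimal sketch with round illustrative inputs of my own (not the AMA’s actual figures) looks like this:

```python
# Illustrative extrapolation of 'deaths due to uninsurance' -- round numbers only.
uninsured_population = 45_000_000     # the commonly cited (and disputed) headcount
baseline_death_rate = 0.002           # assumed annual mortality for a comparable insured group
relative_risk = 1.25                  # AMA-era estimate: uninsured 25% more likely to die

excess_deaths = uninsured_population * baseline_death_rate * (relative_risk - 1)
print(f"Implied excess deaths per year: {excess_deaths:,.0f}")
# With these inputs the arithmetic lands near the oft-quoted ~20,000 figure --
# but the result is only as good as the 25% relative risk and the assumed baseline
# rate, which is exactly what is being questioned here.
```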

Even assuming that the 25 percent statistic holds true today, not all insurance is equal. As Gruber notes on page 74 of his book, the ACA is the biggest expansion of public insurance since the creation of Medicare and Medicaid in 1965, as 11 million Americans will be added to Medicaid because of the ACA. So how does the health of the uninsured compare with those on Medicaid? Quite similarly. As indicated by the results from a two-year study in Oregon that looked at the health outcomes of previously uninsured individuals who gained access to Medicaid, Medicaid “generated no significant improvement in measured physical health outcomes.” Medicaid is more of a financial cushion than anything else.

So with our faith in the AMA study intact, all that would happen is a shift in deaths from the “uninsured” to the “publicly insured.” But the figure is still dubious at best. Those who are uninsured could also suffer from various mortality-increasing traits that the insured lack. As Megan McArdle elaborates on these lurking third variables,

Some of the differences we know about: the uninsured are poorer, more likely to be unemployed or marginally employed, and to be single, and to be immigrants, and so forth. And being poor, and unemployed, and from another country, are all themselves correlated with dying sooner.

Error 5: The largest uninsured group is the working poor.

Before Obamacare, had you ever heard that there are 45 million uninsured Americans? It’s baloney. In 2006, 17 million of the uninsured had incomes above $50,000 a year, and eight million of those earned more than $75,000 a year. According to one estimate from 2009, between 12 million and 14 million were eligible for government assistance but simply hadn’t signed up. Another estimate from the same source notes that between 9 million and 10 million of the uninsured are not American citizens. According to the Centers for Disease Control and Prevention, slightly fewer than 8 million of the uninsured are aged 18–24, the group that requires the least amount of medical care and has an average annual income of slightly more than $30,000.
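Tallying those subgroups against the headline number shows why the “working poor” framing is misleading. The sketch below uses the midpoints of the ranges quoted above and notes that the categories can overlap, so it is a rough upper bound, not an exact decomposition:

```python
# Rough tally of the subgroups mentioned above (midpoints of the quoted ranges).
# Categories can overlap (e.g., a young non-citizen), so this is an upper bound
# on how much of the 45 million figure they cover, not an exact decomposition.
subgroups = {
    "income above $50,000/yr":           17_000_000,
    "eligible for aid but not enrolled": 13_000_000,   # midpoint of 12-14 million
    "not U.S. citizens":                  9_500_000,   # midpoint of 9-10 million
    "aged 18-24":                         8_000_000,
}
total_uninsured = 45_000_000

print(f"Sum of named subgroups: {sum(subgroups.values()):,}")
print(f"Commonly cited total:   {total_uninsured:,}")
# Even allowing for overlap, these groups plausibly account for most of the
# headline number, leaving the uninsured working poor a much smaller slice.
```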

Thus, the largest group of uninsured is not the working poor. It’s the middle class, upper middle class, illegal immigrants, and the young. The working poor who are uninsured are often eligible for assistance but don’t take advantage of it. I recognize that some of these numbers may seem somewhat outdated (the sources for all of them can be found here), but remember: we’re taking account of the erroneous ways Gruber and Obamacare advocates sold the ACA to “stupid” Americans.

Error 6: The ACA will have no impact on premiums in the short term, according to the CBO.

Interesting that there’s no mention of what will happen in the long run. Regardless, not only have there already been premium increases, but one widely reported consequence of the ACA has been increases in deductibles. If I told you that I could offer you an insurance plan for a dollar a year, it would seem like a great deal. If I offered you a plan for a dollar a year with a $1 million deductible, you might not think it’s such a great deal.

A report from PricewaterhouseCoopers’ Health Research Institute found that the average cost of a plan sold on the ACA’s exchanges was 4 percent less than the average for an employer-provided plan with similar benefits ($5,844 vs. $6,119), but the deductibles for the ACA plans were 42 percent higher ($5,081 vs. $3,589). The ACA is thus able to swap one form of sticker shock (high premiums) for another (high deductibles). Let us not forget that the ACA exchanges receive federal subsidies. Someone has to pay for those, too.
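The percentages quoted from the PwC report follow directly from the dollar figures; here is a quick check using only the numbers already quoted above:

```python
# Verify the quoted percentages from the dollar figures above.
aca_premium, employer_premium = 5_844, 6_119
aca_deductible, employer_deductible = 5_081, 3_589

premium_discount = 1 - aca_premium / employer_premium
deductible_markup = aca_deductible / employer_deductible - 1

print(f"ACA exchange premium is {premium_discount:.0%} lower "
      f"(${aca_premium:,} vs ${employer_premium:,})")
print(f"...but its deductible is {deductible_markup:.0%} higher "
      f"(${aca_deductible:,} vs ${employer_deductible:,})")
# Roughly 4% cheaper up front, roughly 42% more out of pocket before coverage kicks in.
```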

Error 7: A pay-for-performance model in health care would increase quality and reduce costs. 

This proposal seems like common sense in theory, but it’s questionable in reality. Many conservatives and libertarians want a similar model for education, so some might be sympathetic to this aspect of Gruber’s proposal. But there is enormous difficulty in determining how we are to rank doctors.

People respond to incentives, but sometimes these incentives are perverse. Take the example of New York, which introduced a system of “scorecards” to rank cardiologists by the mortality rates of their patients who received coronary angioplasty, a procedure to treat heart disease. Doctors paid attention to their scorecards, and they obviously could increase their ratings by performing more effective surgeries. But as Charles Wheelan noted in his book Naked Statistics, there was another way to improve your scorecard: refuse surgeries on the sickest patients, or in other words, those most likely to die even with care. Wheelan cites a survey of cardiologists regarding the scorecards, where 83 percent stated that due to public mortality statistics, “some patients who might benefit from angioplasty might not receive the procedure.”
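A tiny simulation makes the perverse incentive concrete. The numbers are entirely hypothetical: two equally skilled cardiologists, one of whom simply declines the riskiest patients, end up with very different scorecards:

```python
import random

random.seed(42)

def simulated_mortality(accept_high_risk: bool, n_patients: int = 10_000) -> float:
    """Share of accepted patients who die, for a surgeon of fixed skill."""
    deaths = accepted = 0
    for _ in range(n_patients):
        high_risk = random.random() < 0.20          # assume 20% of referrals are very sick
        if high_risk and not accept_high_risk:
            continue                                # cherry-picking surgeon turns them away
        accepted += 1
        death_prob = 0.15 if high_risk else 0.02    # same surgical skill in both cases
        deaths += random.random() < death_prob
    return deaths / accepted

print(f"Takes every patient:       {simulated_mortality(True):.1%} mortality")
print(f"Declines sickest patients: {simulated_mortality(False):.1%} mortality")
# The second surgeon 'scores' far better despite identical skill -- the scorecard
# rewards patient selection, not quality of care.
```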

Error 8: The ACA “allows you to keep your current policy if you like it… even if it doesn’t meet minimum standards.”

What, does this guy think we’re stupid or something?"