Wednesday, January 31, 2018

Planet-wide, inequality is plummeting

By Bjorn Lomborg. Excerpts:

"globally, income distribution is less unequal than it has been for 100 years.

The best data on this comes from Professor Branko Milanovic, formerly of the World Bank, now at City University of New York. His research shows that, mostly because of Asia’s incredible growth, global inequality has declined sharply for several decades, reducing so much that the world hasn’t been this equal for more than a century.

Moreover, the conversation on inequality sparked by Oxfam fails to acknowledge that equality is about much more than money. Look at education and health. In 1870, more than three-quarters of the world was illiterate. Today, more than four out of every five people can read.

Half of all of humanity’s welfare gains from the past 40 years come from the fact that we’re living longer, healthier lives. In 1900, people lived to be 30 on average; today, it’s 71. Over the past half-century, the difference in life expectancy between the world’s wealthiest and poorest countries has dropped from 28 to 18 years.

Oxfam almost entirely glosses over this reality, and instead points to wealth levels within individual countries. It’s true inequality on this measure has increased. But Oxfam overstates the case when it claims that the wealth of the world’s 42 richest people is greater than the bottom 50 percent of the planet (3.7 billion).

A little less than one-fifth of the “bottom half” are actually people with a collective debt of $1.2 trillion: likely mostly rich world citizens, like students with loans or people with negative equity in their houses. It is quite a stretch to classify such people among the world’s poor.

It would be fairer, then, to say that the wealth of the poorest 40 percent of the planet (excluding those with negative wealth) is equal to the wealth of the top 128 billionaires. But this wouldn’t be as catchy as claiming that just 42 people own as much as half the planet.
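The mechanics of that adjustment are easy to check with a toy calculation (stylized figures, purely for illustration, not the underlying wealth data):

    # Stylized illustration: how negative net worth distorts "bottom half" totals.
    # The figures below are invented for the example, not Oxfam's or anyone's data.
    bottom_half = [-1.2e12, 0.4e12, 0.9e12, 1.5e12]  # four slices, poorest first
    with_debtors = sum(bottom_half)                          # 1.6 trillion
    without_debtors = sum(w for w in bottom_half if w > 0)   # 2.8 trillion
    print(with_debtors, without_debtors)
    # Counting the indebted group makes the bottom half look much poorer, so far
    # fewer billionaires are needed to "match" its total wealth.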

Oxfam’s repeated claim that the top 1 percent own more than half the planet’s wealth lacks historical context. Thomas Piketty looked at wealth for select countries and found a dramatic decline in the wealth of the top 1 percent from 1900 to about 1970-80, and a smaller increase since then. Thus, it’s likely that the world is more equal today in terms of wealth than it has been historically, apart from over the past three or four decades.

Looking at the United Kingdom, for example: the wealth share of the top 1 percent has increased, yet the data show that the country was still more unequal in every year before 1977 than it is today.

More relevant than wealth, though, is the measure of income inequality, since this determines our lives from one year to the next. Inequality has indeed risen recently. But what of the bigger picture? Perhaps unsurprisingly, most diagrams used by Oxfam start around 1980, at the historic low point for income inequality.

The data show that the top 1 percent of income in English-speaking countries has returned to levels akin to those in the early 1900s, while in non-English countries it has declined dramatically."

Leaving Nafta Would Cost $50 Billion a Year

America’s gross domestic product is now 0.2% to 0.3% larger than it would be without the agreement

By Matthew J. Slaughter. He is dean of the Tuck School of Business at Dartmouth College. Excerpts:
"In a new report canvassing dozens of academic and policy studies, I find that the U.S. gross domestic product is now 0.2% to 0.3% larger than it would be without Nafta, a yearly boost of about $50 billion.

When U.S.-based multinational companies expand in Mexico and Canada, the result is often more jobs and higher wages back home. These “foreign” investments tend to complement, not replace, U.S. operations. A 2014 Peterson Institute study found that a 10% increase in employment at a U.S. multinational’s Mexican affiliate leads to a 1.3% increase in employment, a 1.7% increase in exports, and a 4.1% increase in research spending in the stateside parent company.

Nafta has helped America’s small businesses, too. In 2014, more than 125,000 small businesses exported $136 billion to Canada or Mexico. That is 25% of all U.S. small-business exports. Not only has Nafta increased the size of American workers’ paychecks, it has helped them stretch those paychecks further. American consumers have saved $10.5 billion a year from lower tariffs under Nafta, with most of the benefits going to households with annual incomes below $70,000.

Consider the delicious case of avocados. For 80 years before Nafta, the U.S. banned all imports of Mexican avocados. The ban was initially relaxed under Nafta and lifted altogether in 2007. U.S. avocado imports surged 2,214% from 1992 to 2012. Yet the overall U.S. market was growing so rapidly that U.S. avocado production rose, not fell. In California, the number of avocado orchards increased from 4,801 in 2002 to 5,602 in 2012. Many U.S. producers have established a high-end niche, with U.S. varieties commanding a price premium.

Withdraw from Nafta, and all its gains would be permanently lost. For U.S. companies and their workers, new barriers to trade and investment would limit access to foreign markets, dull additional innovation and investment, and weaken their supply networks. A Business Roundtable study released this month estimates that U.S. GDP would shrink by at least 0.6%—about $120 billion a year—in the initial post-exit years, with U.S. exports down more than 2%. This drop in output and exports would initially destroy more than a million U.S. jobs across all 50 states.

Think that “tougher” domestic-content rules post-Nafta would help American car manufacturers? Think again. Withdrawing from Nafta could cost the automobile industry more than 20,000 jobs—plus nearly 50,000 auto-parts jobs—while adding $330 to $440 to the cost of every new vehicle sold in America. The idea that more domestic content per vehicle means more domestic jobs ignores that uncompetitive companies make fewer cars."

Tuesday, January 30, 2018

Eliminating the mortgage tax deduction could boost homeownership

Tyler Cowen.
"(3) Implications of US Tax Policy for House Prices, Rents, and Homeownership
Kamila Sommer and Paul Sullivan
This paper studies the impact of the mortgage interest tax deduction on equilibrium house prices, rents, homeownership, and welfare. We build a dynamic model of the housing market that features a realistic progressive tax system in which owner-occupied housing services are tax-exempt and mortgage interest payments are tax-deductible. We simulate the effect of tax reform on the housing market. Eliminating the mortgage interest deduction causes house prices to decline, increases homeownership, decreases mortgage debt, and improves welfare. Our findings challenge the widely held view that repealing the preferential tax treatment of mortgages would depress homeownership.
Here is the link to the AER piece."
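The intuition can be seen in the textbook user-cost-of-housing formula (a deliberately crude sketch, not the authors' dynamic equilibrium model; all parameter values below are assumptions): the deduction lowers the after-tax carrying cost of a given house for itemizing owners, and that subsidy gets capitalized into prices, which keeps marginal renters from buying.

    # Textbook user-cost sketch (not the Sommer-Sullivan model; parameters assumed).
    # Annual cost of owning = after-tax mortgage interest + property tax + maintenance.
    def user_cost(price, rate=0.04, prop_tax=0.01, maint=0.02,
                  marginal_tax=0.25, deductible=True):
        after_tax_rate = rate * (1 - marginal_tax) if deductible else rate
        return price * (after_tax_rate + prop_tax + maint)

    p = 300_000
    print(user_cost(p, deductible=True))   # ~$18,000/yr with the deduction
    print(user_cost(p, deductible=False))  # ~$21,000/yr without it
    # Removing the deduction raises the carrying cost at any given price, so prices
    # must fall for the market to clear, and lower prices let more renters buy.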

Police Union Privileges, Officer Misconduct and Systems Thinking

From Alex Tabarrok.
"In Police Union Privileges I explained how union contracts and police bill of rights give police officers privileges not afforded to regular people. What differences do these privileges make? A new paper, The Effect of Collective Bargaining Rights on Law Enforcement: Evidence from Florida, suggests that police union privileges significantly increase the rate of officer misconduct:
Growing controversy surrounds the impact of labor unions on law enforcement behavior. Critics allege that unions impede organizational reform and insulate officers from discipline for misconduct. The only evidence of these effects, however, is anecdotal. We exploit a quasi-experiment in Florida to estimate the effects of collective bargaining rights on law enforcement misconduct and other outcomes of public concern. In 2003, the Florida Supreme Court’s Williams decision extended to county deputy sheriffs collective bargaining rights that municipal police officers had possessed for decades. We construct a comprehensive panel dataset of Florida law enforcement agencies starting in 1997, and employ a difference-in-difference approach that compares sheriffs’ offices and police departments before and after Williams. Our primary result is that collective bargaining rights lead to about a 27% increase in complaints of officer misconduct for the typical sheriff’s office. This result is robust to the inclusion of a variety of controls. The time pattern of the estimated effect, along with an analysis using agency-specific trends, suggests that it is not attributable to preexisting trends. The estimated effect of Williams is not robustly significant for other potential outcomes of interest, however, including the racial and gender composition of agencies and training and educational requirements.
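For readers who want the mechanics, the comparison the authors run is a standard difference-in-differences. A minimal sketch (the data and column names here are hypothetical; the paper's actual specification is richer, with a comprehensive panel and a variety of controls):

    # Difference-in-differences sketch of the Florida design (illustrative only).
    import pandas as pd
    import statsmodels.formula.api as smf

    # Toy panel: sheriff = 1 for sheriffs' offices (treated by Williams in 2003),
    # post = 1 for years after the decision, complaints = misconduct complaints.
    df = pd.DataFrame({
        "complaints": [10, 11, 14, 15, 10, 10, 11, 11],
        "sheriff":    [1, 1, 1, 1, 0, 0, 0, 0],
        "post":       [0, 0, 1, 1, 0, 0, 1, 1],
    })
    # The interaction coefficient is the diff-in-diff estimate of the effect
    # of gaining collective bargaining rights.
    m = smf.ols("complaints ~ sheriff + post + sheriff:post", data=df).fit()
    print(m.params["sheriff:post"])  # 3.0 extra complaints in this toy example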
This is important research but although I’m not surprised that collective bargaining rights lead to more misconduct I do find the size of the effect implausibly large. One reason is that police union privileges are only one brick in the blue wall. Juries, for example, often fail to convict police even when faced with video evidence that would be overwhelming in any other context [e.g. Philando Castile]. Police union privileges are unjust and should be abolished but solving the problems with policing requires more than a change in naked incentives.

To solve this problem we need to adopt the same kind of systems wide thinking that has led to large reductions in fatal accidents in anesthesiology, airplane crashes, and nuclear accidents. Criminologist Lawrence Sherman writes:
The central point Perrow (1984) made in defining the concept of system accidents is that the urge to blame individuals often obstructs the search for organizational solutions. If a system-crash perspective can help build a consensus that many dimensions of police systems need to be changed to reduce unnecessary deaths (not just but certainly including firing or prosecuting culpable shooting officers), police and their constituencies might start a dialog over the details of which system changes to make. That dialog could begin by describing Perrow’s central hypothesis that the interactive complexity of modern systems is the main target for reform. From the 1979 nuclear power plant near-meltdown at Three Mile Island in Pennsylvania to airplane and shipping accidents, Perrow shows how the post-incident reviews rarely identify the true culprit: It is the complexity of the high-risk systems that causes extreme harm. Similarly, fatal police shootings shine the spotlight on the shooter rather than on the complex organizational processes that recruited, hired, trained, supervised, disciplined, assigned, and dispatched the shooter before anyone faced a split-second decision to shoot."

Monday, January 29, 2018

Why San Francisco has the second-highest construction costs in the world

By Roland Li. He is a reporter for San Francisco Business Times. Excerpts:
"San Francisco has the world's second-highest construction costs because of complex, burdensome approvals, a severe labor shortage and easy paths for opponents to delay projects, according to a new report.

The city's average construction cost of $330 per square foot was second only to New York's, according to a study last year by Turner and Townsend, a construction consultancy. Apartments cost around $425,000 per unit to build, exacerbating the region's housing crisis by requiring high rents or massive public subsidies to make construction feasible.

UC Berkeley's Terner Center for Housing Innovation surveyed developers, contractors, architects and nonprofits building market-rate and affordable residential projects on why costs are so high. Respondents said that city agencies have a complex and unwieldy permitting process, noting “additional hoops and requirements seem to pop up at various stages in the process." They also pointed out that non-standardized building inspections and a lack of coordination between departments add time to the process."

"Design requirements such as facade aesthetics, balcony spaces and more expensive materials were also cited by the Terner Center as cost burdens. Those requirements are particularly challenging for affordable projects, which rely on public subsidies to be financially feasible and can't charge high rents to cover extra costs. As a result of the requirements, some affordable projects have had to reduce unit counts, according to the report."

"Another cost escalation is opposition to projects, with almost every major San Francisco development vulnerable to an appeal to the Board of Supervisors or Board of Appeals. The Terner Center noted that while few appeals result in a project getting rejected, they add more delays. Some major projects such as 5M, the Warriors Arena and Treasure Island have also been sued, tying up construction for a year or more."

Are minimum markup laws still necessary to prevent big chains from using their economies of scale to drive small retailers out of business?

See These Prices Are a Steal—and in Some States, That's Illegal: When Meijer opened two stores in Wisconsin, the state demanded it charge more for dog food, by C.J. Szafir and Patrick Gleason. Mr. Szafir is executive vice president at the Wisconsin Institute for Law and Liberty. Mr. Gleason is director of state affairs at Americans for Tax Reform.
"In the 1930s, many states tried to ward off economic collapse by barring businesses from selling goods below cost. The idea was that minimum markups would soften price competition and keep companies afloat. But almost 90 years after the stock crash of Black Tuesday, these laws are just propping up Overpriced Wednesdays.

Some consumer advocates argue that minimum markups are still necessary to prevent big chains from using their economies of scale to drive small retailers out of business. This claim was debunked last year in a study by Will Flanders, research director at the Wisconsin Institute for Law and Liberty, and Ike Brannon, a fellow at the Cato Institute. After examining data from all 50 states, they concluded that there is no causal relationship between minimum-markup laws and the number of small businesses. So-called mom-and-pop retailers are doing just fine in states that do not have these laws on the books.

But minimum markups do hurt consumers, since they act as a hidden tax that disproportionately harms poor and middle-income households. Wisconsin’s markup law increases the price of back-to-school supplies, such as books, markers and crayons, by 12% to 146% compared with neighboring states, according to a study last year by the MacIver Institute, a conservative think tank.

In a free market, which thrives on price competition, there is nothing wrong with selling goods below cost. Businesses do this all the time in other states on Black Friday or during “back to school” sales. The goal might be to move inventory, minimize losses or encourage repeat customers."

Sunday, January 28, 2018

Sorry to tell you this but Angus Deaton has gone horribly wrong on US poverty

By Tim Worstall.
"Fighting words from mere policy wonks to a Nobel Laureate of course but we're afraid it's true, Angus Deaton is going badly wrong in his analysis of US poverty. The claim is that there are those in the US suffering the sort of poverty, that $1.90 a day type, we more normally associate with what Donald Trump described as "shitholes."

This is not true. What is true is that there have been a number of reports, books, screeds, making the assertion but they don't stand up to analysis.

The first that Deaton mentions is Philip Alston's UN report on the subject. One of us corresponded with him to discuss his report and we didn't get any useful answers, just "gosh this measuring poverty thing is difficult, isn't it?" Or, as one of us put it elsewhere:

Just to emphasize this when they talk about child poverty (para 25) we’re told that 18 percent of children live in poverty, 13.3 million. Then in paragraph 29, we’re told that food stamps (SNAP) lift 5 million out of poverty, the EITC another 5 million.
So, the number of children “living in poverty” is not 13.3 million, is it — it’s 3.3 million. That comes out to just 4.5 percent of children “living in poverty,” after the effects of just two of the things we do to reduce poverty.
In their own report, the U.N. is detailing how their claims of the number in poverty in the U.S. are entirely wrong – codswallop in fact.
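The arithmetic is worth making explicit, using only the report's own numbers:

    # Reproducing the back-of-the-envelope from the UN report's own figures.
    children_in_poverty = 13.3e6   # para 25: children counted as poor
    lifted_by_snap = 5e6           # para 29: lifted out by food stamps
    lifted_by_eitc = 5e6           # para 29: lifted out by the EITC
    remaining = children_in_poverty - lifted_by_snap - lifted_by_eitc  # 3.3 million
    total_children = children_in_poverty / 0.18   # 13.3M is said to be 18% of children
    print(remaining, remaining / total_children)  # ~3.3M, ~4.5% of all children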

The measurement of poverty being used is how much poverty would there be if government wasn't doing something. This is not a good measurement of how much poverty there is after government has done its - no doubt wasteful, not very effective but still extant - work.

Deaton also references the Edin and Shaefer work. Our opinion is that this was deliberately constructed to be misleading. For they look at cash income only. Our $1.90 a day figure is consumption, not cash income. Not only does the E&S "work" not include what government does to alleviate poverty, as above, but it also fails to account for consumption from savings.

Imagine, for example, that you were laid off from a job this Friday, start the next one ten days later on the Monday after next, and don't claim unemployment in the meantime. By their measurement system you are on less than $2 a day in cash income over that period and thus absolutely poor.

This is a nonsensical method of measuring poverty - except, of course, if you were more interested in a polemic which showed there was absolute poverty in the US.

We thus return to our long-articulated insistence. In terms of that $1.90 a day absolute poverty defined by the World Bank, there is no such poverty in the US today. It simply does not exist. Any policy based upon the idea that it does is therefore going to be wrong.

Just one numerical example. Among those who receive anything at all from the program, the average food stamp allocation is $29 per week per person. Some 45 million people receive this benefit of slightly more than $4 a day. Food stamps are not included in either of the two poverty calculations mentioned above, neither the UN's nor Edin and Shaefer's. There is no $1.90 a day poverty in the US."

George Will On Steel Tariffs

See When protectionism is not about protecting America at all
"Next, and soon, will come a government decision about the problem, as our protectors see it, of menacingly inexpensive steel imports, concerning which the administration is pretending to deliberate. The charade of thinking will end with the imposition of yet more steel tariffs/taxes, joining the 149 (some as high as 266 percent) already targeting many of the more than 110 countries and territories from which the United States imports steel. Twenty-four of the existing duties target Chinese steel, which is less than 3 percent of U.S. steel imports. America’s supposedly embattled steel industry is producing more than it did during World War II, and every year in this decade more than 10 percent of U.S.-made steel goods has been exported.

Imposition of the new tariffs/taxes will be done solely by the president, exercising discretion granted to presidents by various laws, including one passed in December 1974, when Congress evidently thought that Watergate, then fresh in memory, had taught that presidents were not sufficiently imperial. Then, as now, Congress seemed to think it had more important things to do than set trade policy.

In his new book “Clashing Over Commerce,” Dartmouth economist Douglas A. Irwin explains that the steel industry was a powerful advocate of protectionism until the 1892 opening of Minnesota’s Mesabi iron ore range, which gave steel producers cost advantages that turned their attention to export markets. The industry’s trade problems began when, in July 1959, the United Steelworkers shut domestic steel production down for 116 days — the longest industrial strike in U.S. history — and steel-consuming industries found alternative suppliers and materials. Desperate management purchased labor peace with increased wages that by the 1980s were 95 percent higher than the average in manufacturing, and soon U.S. steel was priced out of foreign markets. Intermittently since then, the industry has sought and received protection.

In 2002, President George W. Bush imposed tariffs that caused steel prices to surge, costing more jobs in steel-using industries than then existed in steel-making. (Today there are upward of seven times more steel-using than steel-making jobs.) The tariffs cost $400,000 a year for every steel-making job saved, and cost $4 billion in lost wages. Especially hard hit in 2002 were three states — Ohio, Michigan, Pennsylvania — that in 2016 voted for today’s protectionist president."

"Most U.S. steel imports come from four important allies: Canada, South Korea, Mexico and Brazil. The coming steel tariffs/taxes will mean that defense dollars will buy fewer ships, tanks and armored vehicles, just as the trillion infrastructure dollars the administration talks about will buy fewer bridges and other steel-using projects."

Saturday, January 27, 2018

Countries in the top quartile of freedom have a much higher average per capita income than those in other quartiles

See An Update on the Global State of Human Freedom by Marian L. Tupy.
"As Editor of Human Progress, I have the pleasure of writing about the improving state of the world. Evidence from individual scholars, academic institutions, and international organisations clearly shows that human conditions are improving – especially in developing countries.
As Steven Pinker, the Johnstone Professor of Psychology at Harvard University writes in his upcoming book, Enlightenment Now: The Case for Reason, Science, Humanism, and Progress, “The world has made spectacular progress in every single measure of human well-being.”

Regrettably, progress is not linear and the occasional backwards step is unavoidable. Just think of the two World Wars and various genocides that scarred the 20th century. But, to quote Kevin Kelly, founding Executive Editor of Wired magazine, “Ever since the Enlightenment and the invention of Science, we’ve managed to create a tiny bit more than we’ve destroyed each year. But that few percent positive difference is compounded over decades into what we might call civilization.”   

Moreover, progress is not guaranteed. The world could experience a nuclear conflict or an asteroid strike – either of which has the potential to wipe us all out. Not all threats are existential, of course. In recent years, for example, we have witnessed a sustained attack on political and economic freedoms, as well as freedoms of religion and free expression. Considering that human freedom is an integral part of human progress, these particular developments are worth exploring in greater depth.

The Cato Institute’s Center for Global Liberty and Prosperity, where I work, has been measuring the state of human freedom since 2008 – a veritable annus horribilis that saw the greatest economic crisis since the Great Depression, and gave rise to a range of populist movements and illiberal policies. The 2017 Human Freedom Index, published today, once more observes a general decline in human freedom.
How do the report's authors define freedom? “The contest between liberty and power has been ongoing for millennia. For just as long, it has inspired competing conceptions of freedom,” write Ian Vásquez and Tanja Porčnik, who produced the study. “Freedom in our usage is a social concept that recognizes the dignity of individuals and is defined by the absence of coercive constraint … Freedom thus implies that individuals have the right to lead their lives as they wish as long as they respect the equal rights of others.”
This definition of freedom will be familiar to all those who are aware of Isaiah Berlin’s notion of negative liberty. “In the simplest terms,” the authors note, “negative liberty means noninterference by others. Berlin contrasts that type of liberty with positive liberty, which requires the removal of constraints that impede one’s personal improvement or the fulfillment of his potential as the individual understands it.”

Since negative liberty “comes in only one flavor — the lack of constraint imposed on the individual,” it is more easily measured. As such, the HFI uses 79 distinct indicators of personal and economic freedom in the following areas: rule of law, security and safety, movement, religion, expression and information, identity and relationships, size of government, legal system and property rights, access to sound money, and freedom to trade internationally. It also looks at freedom of association, assembly, and civil society, and regulation of credit, labor, and business.
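Mechanically, each indicator is scored on the same 0 to 10 scale and the scores are averaged up into the two top-level domains and then into the overall index. A simplified sketch (the published HFI weights sub-categories within each domain, and the indicator values below are invented):

    # Simplified HFI-style aggregation (illustrative; the real index applies
    # category weights before averaging the two domains).
    personal = [8.2, 7.5, 9.0]   # e.g. rule of law, security, movement (0-10 each)
    economic = [7.0, 6.5, 8.0]   # e.g. size of government, sound money (0-10 each)
    pf = sum(personal) / len(personal)
    ef = sum(economic) / len(economic)
    human_freedom = (pf + ef) / 2
    print(round(human_freedom, 2))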

The 2017 HFI covers 159 countries, with 2015 being the most recent year for which sufficient data are available. On a scale of 0 to 10, where 10 represents more freedom, the average human freedom rating for 159 countries in 2015 was 6.93. Among the countries included in the index, the level of freedom decreased slightly (by 0.05 points) compared to 2014, with 61 countries increasing their ratings and 97 losing ground. Since 2008, the level of global freedom has also fallen slightly (by 0.12 points), with about half of the countries in the index increasing their ratings and half decreasing.

The top five freest jurisdictions are Switzerland, Hong Kong, New Zealand, Ireland, and Australia. The bottom five jurisdictions are Egypt, Yemen, Libya, Venezuela, and Syria. The countries that improved their level of human freedom most since last year’s report are Sierra Leone, Iran, Botswana, Singapore and Suriname. The largest deteriorations in freedom occurred in Burundi, Brunei, Cameroon, Venezuela, and Tajikistan.
Vásquez and Porčnik believe that human freedom and material human progress are related. To give just one example, countries in the top quartile of freedom enjoy a significantly higher average per capita income ($38,871) than those in other quartiles. The average per capita income in the least-free quartile is $10,346. The HFI also finds a strong relationship between human freedom and democracy.

Others may, of course, draw their own conclusions. If, however, all of us agree that freedom is important in and of itself, the slow deterioration of freedom throughout the world is food for thought."

David Henderson On Burger King Net Neutrality

See Burger King Has It Its Way.
"Clemson University economist, and expert on the FCC and net neutrality, Thomas W. Hazlett, wrote the following on Facebook yesterday (he gave me permission to quote) about Burger King's now-famous (infamous) exposition of net neutrality:

Is this The Onion? A Burger King video attempts to explain Net Neutrality regulation. A quick service Whopper costing $25.99 is delivered fast, while a meal you wait for costs $4.99. Oops! Perfectly wrong. Burger King is under no "common carrier" mandate, and is perfectly free to price that way. It would be stupid, and customers would be offended (as in the video, that's the point of the script). So it doesn't happen. No regulation needed. Burger King misunderstands their own explainer video. By the way, Burger King does discriminate among customers. For an extra fee, BK Delivers will take your order to your home. Call it a "fast lane" for the Whopper. Of course, promotional discounts do not apply. And the service varies by area. Don't be offended: not all differential pricing is stupid or anti-consumer. But Burger King should be. On its script, it's violating cheeseburger neutrality.
I'll add two other points:

1. Notice in the video that Burger King employees purposely slow down service to people even though the burger is ready. So Burger King's video makes it look as if the slowness has nothing to do with a capacity constraint. That's not what's happening on the Internet. The reason for low speeds is a capacity constraint.
2. If Burger King really wanted to use its own restaurant as a way to illustrate net neutrality, it would show the employees selling two hamburgers to one customer at the same price as it sells one hamburger to another customer. But then, of course, everyone would see the absurdity in this pricing scheme. Burger King doesn't want them to see. An alternative explanation is that whoever at Burger King wrote the script for its video doesn't understand the issue. Which explanation do you prefer? Have it your way."

Friday, January 26, 2018

More Evidence of the High Collateral Damage of a War on Cash

By Lawrence H. White.

"The leading arguments for banning large-denomination currency notes are those made in a much-cited working paper by Peter Sands and at book length by Kenneth Rogoff. They have been rebutted persuasively by Pierre Lemieux and Jeffrey Hummel in their respective reviews of Rogoff’s book. I have previously offered my own rebuttals here and here.

The justification for returning to the topic now is that two recent reports, issued by the Federal Reserve Bank of San Francisco and by the European Central Bank, provide new evidence on the public’s use of large-denomination notes. This evidence is essential to any serious evaluation of proposals to ban large-denomination notes in the United States and Europe.

The Sands and Rogoff argument assumes that the users of large bills are almost entirely criminals; use by innocent citizens is rare. Rogoff writes in his book:
The bulk of US cash in circulation cannot be accounted for by consumer surveys. Obviously, if consumers are holding only a small fraction of all cash outstanding, they cannot possibly be holding more than a small fraction of the $100 bills in circulation, since $100 bills account for nearly 80 percent of the value of US currency.
By contrast: “The drug trade is a famously cash-intensive business at every level.”
Peter Sands declares: “Eliminating high denomination notes has limited downside since such notes play such little role in the legitimate economy.” Sands downplays any effect of eliminating large notes on the welfare of non-criminals, those he calls “legitimate” currency hoarders, on the assumption that they are at most a small minority of currency holders, while criminals are the vast majority:
The other arguments for retaining high denomination notes [besides profitability to the issuing government] largely revolve around some individuals’ desire to hoard or save cash “under the bed” given concerns about banks, or the utility of high denomination notes in emergencies, war zones or natural disasters. There probably is some legitimate hoarding, particularly in countries with a history of banking crises, but the reality is that most of the money that is hoarded in cash is kept from the banking system in order to keep its origins from scrutiny. Hoarding cash appears highly correlated with tax evasion. [Legitimate hoarding] can only account for a minute fraction of high denomination notes.
In actual reality, nobody really knows the shares of the stock of large bills held by non-criminal hoarders and by various types of criminals, because people who agree to answer survey questions have every incentive to under-report their holdings, whether acquired lawfully or otherwise. It stands to reason that ordinary citizens who hoard cash, say because they dislike surveillance of their banking activity, or fear a breakdown in banking system functionality for reasons of natural disaster (such as recently happened in Puerto Rico) or political upheaval, are the very people who are least likely to divulge the true size of their hoards to strangers, no matter what assurances of anonymity they receive.

Sands argues that in cases of legally acquired hoards, the welfare loss from banning large notes would be minimal, because “lower denomination notes offer an only slightly more inconvenient solution for ordinary people, given the sums involved. Only the very wealthy would be truly inconvenienced by having to make such a substitution.” But this is a hand-waving argument rather than a factual deduction. To securely hoard any dollar amount in $10 bills rather than $100 bills requires a safety deposit box ten times as large, or buying a lockbox ten times as large to hide at home. It is far from obvious that “only the very wealthy” hoarders would be “truly inconvenienced.”
Directly addressing the concern that large bills have legitimate uses, Sands responds [footnote call omitted]:
Some suggest that high denomination notes play an important role in economic activity. There is little evidence for this assertion. Whilst low denomination notes continue to play a significant role in legitimate economic activity even in the most advanced economies given the transactional convenience they provide, high denomination notes do not.
The new FRBSF and ECB survey evidence is most relevant to assessing claims like this one, allowing us to quantify (if imperfectly) how significant a role high-denomination notes actually play.
Shaun O’Brien’s report on “Preliminary Findings from the 2016 Diary of Consumer Payment Choice” for the San Francisco Fed unfortunately does not break down US currency use by denomination. Nonetheless it has at least three useful takeaways for the “war on cash” debate:
  • “Cash is held and used by a large majority of consumers, regardless of age and income.”
  • “[C]ash was the most, or second most, used payment instrument regardless of household income, indicating that its value to consumers as a payment instrument was not limited to lower income households that may be less likely to have access to an account at a financial institution.”
  • Cash is used to make 8 percent of all payments of $100 or more. We don’t know the mix of denominations used, but this certainly leaves open the possibility that $100 and $50 bills play a significant role in a non-negligible share of legitimate economic activity.
The ECB study by Henk Esselink and Lola Hernández, entitled “The use of cash by households in the euro area,” reports on cash use in all 19 eurozone countries, based on a 2016 survey. Two immediately relevant findings are that many ordinary members of the public store cash for emergency use, and commonly handle even the highest denomination notes:
The study confirms that cash is not only used as a means of payment, but also as a store of value, with almost a quarter of consumers keeping some cash at home as a precautionary reserve. It also shows that more people than often thought use high denomination banknotes; almost 20% of respondents reported having a €200 or €500 banknote in their possession in the year before the survey was carried out. […]
Of those respondents who acknowledged that they put cash aside, only 23 percent kept €100 or less.  22 percent kept between €101 and €250, 19 percent between €251 and €500, 15 percent between €500 and €1000, and 12 percent more than €1000. In addition 10 percent refused to specify the amount. If we assume conservatively that the non-specifiers were distributed in the same proportions as the specifiers, then those who kept more than €100 as a precautionary reserve comprised about 75 percent of respondents (almost 25 percent of those surveyed) who reported keeping cash in reserve. Thus about 18 percent of the Eurozone population has cash holdings large enough that they may benefit from using notes of €100 and above merely as a compact means of storing wealth.
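The pro-rata step in that paragraph works as follows:

    # Reallocating the 10% who refused to answer across the reported buckets,
    # in proportion to the reported shares (shares as quoted above; rounding
    # explains why they sum near 100).
    buckets = {"<=100": 23, "101-250": 22, "251-500": 19, "501-1000": 15, ">1000": 12}
    specified = sum(buckets.values())                              # 91
    over_100 = sum(v for k, v in buckets.items() if k != "<=100")  # 68
    share_over_100 = over_100 / specified
    print(share_over_100)         # ~0.75 of those who keep a cash reserve
    print(share_over_100 * 0.24)  # ~0.18 of everyone, taking "almost a quarter"
                                  # of consumers as cash keepers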
By contrast with the US figure of 8 percent cash among payments over $100, Europeans use cash to make 32 percent of payments over €100. Such payments, the authors report,
amounted to 10% of the value of all cash payments at the POS [point of sale] in the euro area. The share of cash payments above €100 in the total value of cash payments at the POS was wide-ranging, from 3% in France or 5% in Belgium, to 21% in Ireland and Slovenia or 26% in Greece.
These numbers do not indicate to me that law-abiding cash use is negligible — but armed with the figures, the reader can make his or her own judgement about what level of cash use counts as non-negligible.

I want to add a somewhat tangential but related additional comment: Besides making the debatable quantitative assumption that law-abiding cash use is negligibly small, Sands and Rogoff also make a normative assumption that strongly tilts their seemingly neutral estimates of overall welfare effects. They assume that the welfare of people who use cash for illicit purposes doesn’t count, while disrupting their operations by banning large notes is pure benefit to the rest of us.

As Lemieux, Hummel, and also David Henderson have noted, however, an economic analyst may justifiably distinguish, among the set Sands lumps together as “financial criminals,” those actors who violate personal and property rights (kidnappers, thieves and fences, extortionists, terrorists) from those whose illicit activity consists of peacefully trading in illicit goods and services (drug dealers, sex workers, and the like). The first group clearly generates negative-sum outcomes, while the second group generates positive-sum outcomes — mutual gains from trade — from the point of view of the participants. Taking the point of view of the participants is the standard approach in modern welfare economics. The principle of gains from trade — gains from capitalist acts between consenting adults — applies to drug sales and sex work despite their illicit status in many jurisdictions. Banning high-denomination notes in order to raise the cost of such trades means reducing the economic welfare of the participants in those markets. To the extent that the main illicit use of high-denomination notes is in victimless markets, a policy to suppress their use is harmful rather than beneficial from this perspective.

Cases of people who make or take illicit bribes, pursuing this logic, have to be sorted between trade-enhancing bribes and trade-restricting bribes. Making bribery more costly is not an unmixed blessing if without certain bribes the economy fails to function as smoothly. It likewise cannot be taken for granted that all tax evasion reduces overall economic welfare once it is recognized that some taxes may be too high from a Kaldor-Hicks efficiency standpoint, meaning at a level where their marginal deadweight loss (the uncaptured gains due to tax-blocked trades) exceeds the net gains from the government projects they finance."

How Taxes Distort The Way Buildings Were Once Built

By architect Kurt Kohlstedt of 99percentinvisible. It has some great pictures. Excerpts.
"Pictures of Paris tend to show off key architectural features of the city, like the Eiffel Tower, the Arc de Triomphe, the Notre-Dame Cathedral or, in the case of more everyday buildings: mansard roofs. Punctured by dormer windows, these steeply sloped roofs are an iconic part of the local vernacular. And their ubiquity is not just driven by aesthetics, but also a history of height limitations in the City of Light.

In 1783, Paris implemented a 20-meter (roughly 65 feet) restriction on structures, with a crucial caveat: the limit was based on measuring up to the cornice line, leaving out the roof zone above.
Naturally, land owners seeking to optimize their habitable space responded by building up mansard roofs. Later window-based taxes offset some of the financial incentive behind this design strategy, but in 1902, an expansion of the law allowed up to four additional floors to be built using the roof-related loophole, helping to re-expand its utility. Similar restrictions in other places helped the mansard style spread beyond Paris as well.

A 1916 zoning resolution in New York City, for instance, called for setbacks on tall buildings. Mansard roofs represented an ideal design choice, practical but also associated with Parisian culture. 

Dutch canal houses are another classic example of how rules and regulations can shape structures. Taxed on their canal frontage rather than height or depth, these buildings grew tall and thin. In turn, this typology developed narrower staircases, necessitating exterior hoist systems to move furniture and goods into and out of upper floors.

Meanwhile, across Europe — including England, Wales, Scotland, Ireland and France — a history of “window taxes” also reshaped the built environment, albeit in less aesthetically pleasing ways. Mainly, these levies resulted in owners bricking up windows to avoid the tax.

Window taxes date back as far as 1696, introduced by English King William III as an alternative to income taxes for the wealthy. Houses in England were taxed by unit at a flat rate, then an additional rate if they had over 10 windows (then more again if they went over 20 or 30). And though they were repealed in most places well over a century ago, the legacy of bricked-up windows remains on many old structures.

These kinds of external factors can shape the interiors of structures, too. As far back as the 9th century (and possibly earlier), “hearth taxes” were used by the Byzantine Empire as a proxy for family units, levied based on the number of fireplaces in a municipality. However, later versions of this tax were known to cause some unintended consequences. For instance, a baker in the 1600s broke through his back wall to use the neighbor’s chimney and avoid a hearth tax. A resulting fire destroyed 20 homes and killed four people."

"The list of true examples is long, with even small-seeming taxes shaping the fundamental building blocks of structures as well as materials used in everyday decor. Great Britain is rich with such tax-impacted design history.
In 1712, a tax was introduced on patterned, printed and painted wallpaper, for instance, leading people to buy plain paper and stencil designs on it (thus avoiding the tax).

A few decades later the imposition of a weight-based glass tax led to the production of smaller as well as more hollow-stemmed glassware (often referred to as “excise glasses”).

Introduced in 1784, a per-thousand-brick tax led to the creation of larger bricks (eventually held in check by new legislation about how big a single brick could be). To this day, historians can use brick size to help date the construction of different buildings around Britain."

Thursday, January 25, 2018

Prosperity and Taxation: What Can We Learn from the 1920s?

By Dan Mitchell.

"Last November, I wrote about the lessons we should learn from tax policy in the 1950s and concluded that very high tax rates impose a very high price.

About six months before that, I shared lessons about tax policy in the 1980s and pointed out that Reaganomics was a recipe for prosperity.

Now let’s take a look at another decade.

Amity Shlaes, writing for the City Journal, discusses the battle between advocates of growth and the equality-über-alles crowd.
…progressives have their metrics wrong and their story backward. The geeky Gini metric fails to capture the American economic dynamic: in our country, innovative bursts lead to great wealth, which then moves to the rest of the population. Equality campaigns don’t lead automatically to prosperity; instead, prosperity leads to a higher standard of living and, eventually, in democracies, to greater equality. The late Simon Kuznets, who posited that societies that grow economically eventually become more equal, was right: growth cannot be assumed. Prioritizing equality over markets and growth hurts markets and growth and, most important, the low earners for whom social-justice advocates claim to fight.
Amity analyzes four important decades in the 20th century, including the 1930s, 1960s, and 1970s.
Her entire article is worth reading, but I want to focus on what she wrote about the 1920s. Especially the part about tax policy.

She starts with a description of the grim situation that President Harding and Vice President Coolidge inherited.
…the early 1920s experienced a significant recession. At the end of World War I, the top income-tax rate stood at 77 percent. …in autumn 1920, two years after the armistice, the top rate was still high, at 73 percent. …The high tax rates, designed to corral the resources of the rich, failed to achieve their purpose. In 1916, 206 families or individuals filed returns reporting income of $1 million or more; the next year, 1917, when Wilson’s higher rates applied, only 141 families reported income of $1 million. By 1921, just 21 families reported to the Treasury that they had earned more than a million.
Wow. Sort of the opposite of what happened in the 1980s, when lower rates resulted in more rich people and lots more taxable income.
But I’m digressing. Let’s look at what happened starting in 1921.
Against this tide, Harding and Coolidge made their choice: markets first. Harding tapped the toughest free marketeer on the public landscape, Mellon himself, to head the Treasury. …The Treasury secretary suggested…a lower rate, perhaps 25 percent, might foster more business activity, and so generate more revenue for federal coffers. …Harding and Mellon got the top rate down to 58 percent. When Harding died suddenly in 1923, Coolidge promised to “bend all my energies” to pushing taxes down further. …After winning election in his own right in 1924, Coolidge joined Mellon, and Congress, in yet another tax fight, eventually prevailing and cutting the top rate to the target 25 percent.
And how did this work?
…the tax cuts worked—the government did draw more revenue than predicted, as business, relieved, revived. The rich earned more than the rest—the Gini coefficient rose—but when it came to tax payments, something interesting happened. The Statistics of Income, the Treasury’s database, showed that the rich now paid a greater share of all taxes. Tax cuts for the rich made the rich pay taxes.
To elaborate, let’s cite one of my favorite people. Here are a couple of charts from a study I wrote for the Heritage Foundation back in 1996.

The first one shows that the rich sent more money to Washington when tax rates were reduced and also paid a larger share of the tax burden.



And here’s a look at the second chart, which illustrates how overall revenues increased (red line) as the top tax rate fell (blue).



So why did revenues climb after tax rates were reduced?

Because the private economy prospered. Here are some excerpts about economic performance in the 1920s from a very thorough 1982 report from the Joint Economic Committee.
Economic conditions rapidly improved after the act became law, lifting the United States out of the severe 1920-21 recession. Between 1921 and 1922, real GNP (measured in 1958 dollars) jumped 15.8 percent, from $127.8 billion to $148 billion, while personal savings rose from $1.59 billion to $5.40 billion (from 2.6 percent to 8.9 percent of disposable personal income). Unemployment declined significantly, commerce and the construction industry boomed, and railroad traffic recovered. Stock prices and new issues increased, with prices up over 20 percent by year-end 1922. The Federal Reserve Board’s index of manufacturing production (series P-13-17) expanded 25 percent. …This trend was sustained through much of 1923, with a 12.1 percent boost in GNP to $165.9 billion. Personal savings increased to $7.7 billion (11 percent of disposable income)… Between 1924 and 1925 real GNP grew 8.4 percent, from $165.5 billion to $179.4 billion. In this same period the amount of personal savings rose from an already impressive $6.77 billion to about $8.11 billion (from 9.5 percent to 11 percent of personal disposable income). The unemployment rate dropped 27.3 percent; interest rates fell, and railroad traffic moved at near record levels. From June 1924 when the act became law to the end of that year the stock price index jumped almost 19 percent. This index increased another 23 percent between year-end 1924 and year-end 1925, while the amount of non-financial stock issues leapt 100 percent in the same period. …From 1925 to 1926 real GNP grew from $179.4 billion to $190 billion. The index of output per man-hour increased and the unemployment rate fell over 50 percent, from 4.0 percent to 1.9 percent. The Federal Reserve Board’s index of manufacturing production again rose, and stock prices of nonfinancial issues increased about 5 percent.
Now for some caveats.

I’ve pointed out many times that taxes are just one of many policies that impact economic performance.

It’s quite likely that some of the good news in the 1920s was the result of other factors, such as spending discipline under both Harding and Coolidge.

And it’s also possible that some of the growth was illusory since there was a bubble in the latter part of the decade. And everything went to hell in a hand basket, of course, once Hoover took over and radically expanded the size and scope of government.

But all the caveats in the world don’t change the fact that Americans – both rich and poor – immensely benefited when punitive tax rates were slashed."

Charter Schools Lead To Higher Student Achievement In Nearby Public Schools

See Diverted Educational Resources = Higher Student Achievement? by Corey A. DeAngelis of Cato. 
"Education scholars such as Richard Kahlenberg from The Century Foundation claim that since school choice programs “divert important resources away from the public schools,” children left behind in traditional public schools could be negatively impacted academically. However, a peer-reviewed study recently released by Temple University professor Sarah Cordes finds that charter school competition actually improves student achievement in nearby traditional public schools in the nation’s largest school district—New York City.

Specifically, Cordes finds that attending a traditional public school within a mile of a charter school in NYC increases student achievement in math and reading by about 0.015 standard deviations, or around 11 days of additional learning in both subjects. The detected effects increase with the proximity of the public charter school competition.
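The days-of-learning conversion presumably runs along these lines (the benchmarks here are my assumptions; Cordes's exact conversion may differ):

    # Converting an effect size into "days of learning" (benchmark values assumed).
    effect_sd = 0.015        # estimated effect in standard deviations
    year_gain_sd = 0.25      # rough benchmark for one school year of learning
    school_days = 180        # typical US school year
    print(effect_sd / year_gain_sd * school_days)  # ~10.8, i.e. "around 11 days"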

But why does this happen?

Residentially assigned public schools only lose funding if families are able to exit them for an alternative private or public educational option. If a traditional public school leader knows that their educational institution could be financially harmed by the choices of individual families, they will have a strong incentive to cater to the needs of their students. Since parents care about the academic success of their children, public school leaders will need to focus on turning educational resources into vital lifelong outcomes when faced with competitive pressures.

Although these findings may surprise those who listen to the frequent claims made by public education monopolists, they should not surprise social scientists. This study only adds to the abundance of evidence on the topic, which points in the same direction.

Prior Scientific Evidence

As shown in Table 1 below, 23 of 24 such prior evaluations find that competitive pressures from private school choice programs improve the test scores of students left behind in traditional public schools. One study did not find any statistically significant competitive effects.

Table 1: Effects of School Choice Competition on Public School Test Scores

[Table not reproduced here. In the original, green boxes mark studies finding statistically significant positive effects on student test scores in traditional public schools; yellow boxes mark studies finding no statistically significant effect.]

Another peer-reviewed systematic examination of the scientific evidence finds the same conclusion: 20 of 21 reviewed studies indicate that private school choice programs improve the achievement of students that are left behind in their assigned public schools. No studies found negative effects.

Public school leaders that are currently able to compel families to pay for their educational services, nearly regardless of quality or price levels, have an obvious interest in preserving the existing public school monopoly. Rather than listen to the propaganda disseminated by those in power, we should embrace rational theory and the evidence produced by the only thing that can allow us to approach truth: the scientific method."

Wednesday, January 24, 2018

Right-to-work laws reduce Democratic Presidential vote shares

See Right to Work Works by David Henderson of EconLog.

"Labor unions play a central role in the Democratic party coalition, providing candidates with voters, volunteers, and contributions, as well as lobbying policymakers. Has the sustained decline of organized labor hurt Democrats in elections and shifted public policy? We use the enactment of right-to-work laws--which weaken unions by removing agency shop protections-- to estimate the effect of unions on politics from 1980 to 2016. Comparing counties on either side of a state and right-to-work border to causally identify the effects of the state laws, we find that right-to-work laws reduce Democratic Presidential vote shares by 3.5 percentage points. We find similar effects in US Senate, US House, and Gubernatorial races, as well as on state legislative control. Turnout is also 2 to 3 percentage points lower in right-to-work counties after those laws pass. We next explore the mechanisms behind these effects, finding that right-to-work laws dampen organized labor campaign contributions to Democrats and that potential Democratic voters are less likely to be contacted to vote in right-to-work states. The weakening of unions also has large downstream effects both on who runs for office and on state legislative policy. Fewer working class candidates serve in state legislatures and Congress, and state policy moves in a more conservative direction following the passage of right-to-work laws.
This is from James Feigenbaum, Alexander Hertel-Fernandez, and Vanessa Williamson, "From the Bargaining Table to the Ballot Box: Political Effects of Right to Work Laws," January 20, 2018. Many people who oppose, and many people who support, right-to-work laws think the following: because such laws weaken unions' power to use employees' dues to contribute to political campaigns, right-to-work laws will cause there to be fewer union contributions to Democratic candidates than otherwise. It turns out that both sides are right. Feigenbaum et al. find, as the abstract above says, that union contributions are lower than otherwise in such states and that this makes the vote for Democratic politicians lower than otherwise.

The ideal libertarian solution to the issue of unions is not right to work laws; the ideal solution is freedom of association for employees and employers. People should be free to join unions or not, and people should be free to work for employers who require unions, for employers who don't require unions, and for employers who refuse to deal with unions. The freedom of association applies to employers as well as employees. That means that if some employers want a requirement that every employee be a member of a union or, if not a member, be required to pay dues to a union, this should be allowed. Right to work laws prevent this. However, the number of employers in this category is likely to be very small. So right to work laws are a substantial step toward freedom of association."

New research from Denmark finds that motherhood and a ‘child penalty’ are responsible for the gender earnings gap

From Mark Perry.
"That is the conclusion of a new NBER research paper by economists from Princeton, the London School of Economics and the Danish Ministry of Finance titled “Children and Gender Inequality: Evidence from Denmark” — that the gender pay gap in Denmark is explained largely by a “child penalty” that adversely impacts the earnings of mothers more than fathers, rather than by gender discrimination. Here are excerpts from the paper’s abstract, introduction and conclusion:
Abstract
Despite considerable gender convergence over time, substantial gender inequality persists in all countries. Using Danish administrative data from 1980-2013 and an event study approach, we show that most of the remaining gender inequality in earnings is due to children. The arrival of children creates a gender gap in earnings of around 20% in the long run, driven in roughly equal proportions by labor force participation, hours of work, and wage rates. Underlying these “child penalties”, we find clear dynamic impacts on occupation, promotion to manager, sector, and the family friendliness of the firm for women relative to men. Based on a dynamic decomposition framework, we show that the fraction of gender inequality caused by child penalties has increased dramatically over time, from about 40% in 1980 to about 80% in 2013.
Introduction
For a range of labor market outcomes, we find large and sharp effects of children: women and men evolve in parallel until the birth of their first child, diverge sharply immediately after childbirth, and do not converge again. Defining the “child penalty” as the percentage by which women fall behind men due to children, we find that the long-run child penalty in earnings equals about 20% over the period 1980-2013. This should be interpreted as a total penalty including the costs of children born after the first one, and we show that the penalty is increasing in the number of children. The earnings penalty can come from three margins — labor force participation, hours of work, and the wage rate — and we find sharp effects on all three margins that are roughly equal in size.
We show that children affect the job characteristics of women relative to men in a way that favors family amenities over pecuniary rewards. Specifically, just after the birth of the first child, women start falling behind men in terms of their occupational rank (as ordered by earnings or wage rate levels) and their probability of becoming manager. Moreover, women switch jobs to firms that are more “family friendly” as measured by the share of women with young children in the firm’s workforce, or by an indicator for being in the public sector, which is known to provide flexible working conditions for parents. The importance of the family friendliness of occupations and firms for gender equality has been much discussed in recent work, but here we provide clean event study evidence that these qualitative dimensions directly respond to the arrival of children.
Finally, we note that children may have two conceptually different effects on labor market outcomes. One is a pre-child effect of anticipated fertility: women may invest less in education or select family friendly career paths in anticipation of motherhood. The other is a post-child effect of realized fertility: women changing their hours worked, occupation, sector, firm, etc., in response to actual motherhood. The event study approach cannot capture pre-child effects; it is designed to identify post-child effects conditional on pre-child choices. If women invest less in education and career in anticipation of motherhood, then our estimated child penalties represent lower bounds on the total lifetime impacts of children.
Conclusions
1. The impact of children on women is large and persistent across a wide range of labor market outcomes, while at the same time men are unaffected. The female child penalty in earnings is close to 20% in the long run. Underlying this earnings penalty, we find sharp impacts of children on labor force participation, hours worked, wage rates, occupation, sector, and firm choices. Together, these findings provide a quite complete picture of the behavioral margins that adjust in response to parenthood and how strongly gendered these margins are.
2. We have decomposed gender inequality into what can be attributed to children and what can be attributed to other factors. We have shown that the fraction of child-related gender inequality has increased dramatically over time, from around 40% in 1980 to around 80% in 2013. Therefore, to a first approximation, the remaining gender inequality is all about children. Our decomposition analysis represents a re-orientation of traditional gender gap decompositions: instead of studying the extent to which men and women receive unequal pay for equal work (the unexplained gap after controlling for human capital and job characteristics, but not children), we study the extent to which they receive unequal pay as a result of children (but not necessarily for equal work). The unexplained gap in traditional decomposition analyses is often labeled “discrimination,” but our analysis highlights that the unexplained gap is largely due to children. This does not rule out discrimination, but implies that potential discrimination operates through the impacts of children.
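For readers who want the mechanics, here is a minimal sketch of the event-study logic described above, tracking earnings relative to the year before the first birth. It is a deliberate simplification: a full event study would also control for age and year effects, and the data layout and column names below are assumptions, not the authors' files.

import pandas as pd

# Hypothetical person-year panel; all column names are assumed.
panel = pd.read_csv("parent_earnings_panel.csv")
panel["event_time"] = panel["year"] - panel["first_child_year"]

# Mean earnings by sex and event time, normalised to event_time == -1
# (the year before the first birth), so both paths start at 1.0.
paths = panel.groupby(["female", "event_time"])["earnings"].mean().unstack(0)
paths = paths / paths.loc[-1]

# Long-run child penalty: the percentage by which women fall behind
# men, here read off ten years after the first birth.
penalty = 1 - paths.loc[10, 1] / paths.loc[10, 0]
print(f"Long-run child penalty: {penalty:.0%}")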
Related: It’s interesting to note that Denmark has a very generous paid parental leave policy of 52 weeks per child, here’s a summary:
A mother can take four weeks off before the child is born as pregnancy leave. After birth, mothers are entitled to take 14 weeks of maternity leave. During these first 14 weeks, the father can take two consecutive weeks off as well. Afterwards, both parents are entitled to split 32 weeks of parental leave, which can be further extended by another 14 weeks.
According to the law, parents can receive a total of 52 weeks of paid leave per child from the government. The amount that the parents are entitled to is less than a full salary. However, many companies, especially private companies in Denmark, have an employee agreement in place under which they pay the employee’s full salary for a period of time. In that situation, the amount paid by the government is reimbursed to the company, which in turn pays the parent’s full salary. When the employee’s right to a full salary from the company during maternity/parental leave ends, the government benefits are paid directly to the employee.
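For what it's worth, the quoted figures do sum to the 52 paid weeks, on the assumption that the father's two weeks count toward the per-child total and that the 14-week extension sits outside it:

# One way the weeks add up; this accounting is an assumption for
# illustration, not an official breakdown.
leave_weeks = {
    "pregnancy leave (mother, before birth)": 4,
    "maternity leave": 14,
    "paternity leave (during the first 14 weeks)": 2,
    "shared parental leave": 32,
}
print(sum(leave_weeks.values()))  # 52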
Of course, to fund those 52 guaranteed weeks of parental leave per child, Denmark has some of the highest taxes in the world, with a personal income tax rate that reaches 55.8% and a sales tax rate of 25%!

MP: Overall, the research from Denmark concluding that personal career and family choices, more than gender discrimination, explain gender differences in earnings is not surprising, and is in fact intuitively obvious! The results are also consistent with this conclusion from a 2009 report for the US Department of Labor titled “An Analysis of the Reasons for the Disparity in Wages Between Men and Women”:
This study leads to the unambiguous conclusion that the differences in the compensation of men and women are the result of a multitude of factors and that the raw wage gap should not be used as the basis to justify corrective action. Indeed, there may be nothing to correct. The differences in raw wages may be almost entirely the result of the individual choices being made by both male and female workers."

Tuesday, January 23, 2018

Oxfam pollutes the debate on poverty with phony statistics and false narratives

By Ryan Bourne of Cato.

"Credit to Oxfam’s communications team. Each year, riding on the coattails of the jamboree in Davos, they manage to make a huge splash about global wealth inequality.

And every year, it is pointed out that, as Tim Worstall explained on CapX last January, wealth is not some fixed pie. It is usually accumulated through entrepreneurial activity that fulfils wants and needs, enhancing global welfare. Sadly, most readers who take only a passing interest in the story will miss this nuance and receive claims such as “82 per cent of all wealth created in the last year went to the top 1 per cent” with the shock they are designed to trigger.

Oxfam is, of course, a development charity. Their implicit message, amplified through major broadcasting outlets such as the BBC, is that the wealth of the global rich causes the poverty of the poor. But where exactly is the evidence that more interventionist government is the way to reduce global poverty? In fact, recent economic history suggests the opposite: global poverty has plummeted as major countries have liberalised and ceased trying to “manage” their economies in the way Oxfam wants.

It would be bad enough if Oxfam’s ideological bias were merely blinding the organisation to what works in the fight against poverty. But the charity also appears to be playing fast and loose with the facts. Take just one of the claims in their report, subsequently republished on the BBC website. Oxfam makes the astonishing claim that “two-thirds of billionaires’ wealth is the product of inheritance, monopoly and cronyism”. Given that previous assessments by Forbes, Wealth-X and others have found that around 60 per cent of American billionaires are “self-made,” this seems a particularly striking statistic, in which monopolies and cronyism are doing a lot of heavy lifting.
Intrigued by this finding, Sam Dumitriu of the Adam Smith Institute sought out its source. He found that the methodology was devised in an Oxfam discussion paper called Extreme Wealth Is Not Merited by Didier Jacobs. Overall, that study concludes that 19 per cent of billionaire wealth arises from monopolies, with the remainder of the roughly 65 per cent total coming from inheritance or cronyism. To calculate the share coming from inheritance, Jacobs used Forbes data, which chalks up all wealth of individuals who inherited fortunes as “inherited wealth”, regardless of whether that wealth has grown substantially since the inheritance. This figure, by definition, ignores any extra wealth generated by that inheritance and so is hardly representative of genuine passive inheritance.

The cronyism figure is more speculative still. It includes “wealth mainly acquired in a corruption-prone country and state-dependent industry (high presumption of cronyism)” or “wealth mainly acquired in the mining, oil and gas industry.” Again, while in many countries these industries do depend on state favours and are prone to crony capitalism, it seems a little much to suggest that all wealth in these industries in certain countries can be recorded as wealth driven by cronyism.

Oxfam’s real agenda becomes clear, though, when we look at their methodology for the monopoly portion of the claim. As Dumitriu has described in detail, Jacobs first defines monopoly to include any industry with “network effects.” By construction, then, firms such as Facebook and Google count as monopolists, even though their existence has been overwhelmingly beneficial for consumers. He then makes the same intellectual leap again, asserting that all wealth coming from the IT industry should be recorded as “monopoly” wealth. Not content with that, he applies the same blanket approach to finance, health care, the legal industry, and wealth acquired as CEO of a company the billionaire neither founded nor inherited.
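To make the critique concrete, here is a toy version of the blanket rule being described, with invented numbers: once an industry is flagged, every dollar of billionaire wealth in it counts as monopoly or cronyism wealth, however it was earned.

# Toy illustration only; the billionaires and figures are invented.
SUSPECT = {"IT", "finance", "health care", "legal", "mining/oil/gas"}

billionaires = [
    {"name": "A", "wealth_bn": 70, "industry": "IT"},       # software founder
    {"name": "B", "wealth_bn": 40, "industry": "retail"},
    {"name": "C", "wealth_bn": 25, "industry": "finance"},
]

flagged = sum(b["wealth_bn"] for b in billionaires if b["industry"] in SUSPECT)
total = sum(b["wealth_bn"] for b in billionaires)
print(f"'Ill-gotten' share under the rule: {flagged / total:.0%}")  # 70%

Note that the resulting share depends entirely on which industries get flagged, not on how any individual fortune was actually made.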

To call this methodology shoddy, and to say it hugely overestimates the wealth acquired by “bad” means, would be a spectacular understatement. Again and again, the mere possibility of cronyism, or a theoretical argument for market failure in an industry, is taken to prove that all billionaire wealth in that industry is ill-gotten. That this kind of report is being taken seriously and propagated by our state broadcaster is a travesty.
We should not give Oxfam a free pass or refuse to criticise them for publishing and distributing such nonsense because they happen to be a charity or sometimes do some good. To do so would be like ignoring socialist failures because the revolutionaries had “good intentions”.

Oxfam increasingly pollutes our discourse with phony statistics and false narratives in a highly politicised way. These findings are being used to call for a policy shift – a turn away from market-based capitalism, which has lifted billions around the world out of poverty. No doubt there will be plenty of wanna-be world planners at Davos this week who will lap up the message – the Shadow Chancellor, John McDonnell, will be one of them. But Oxfam’s political agenda goes against the history of the economic development they purport to want.

Perhaps more importantly though, it’s based on very bad analysis. And it’s time our media held them to higher standards, rather than taking their politicised work at face value."

Tax Reform Outperforms Government Programs on Community Investment

By Daniel Press of CEI.

"Tax reform is the gift that keeps on giving. Americans for Tax Reform has documented the ever-growing list of companies providing pay raises, bonuses, 401(k) increases, and new capital investment. But there’s even more good news. Today, JP Morgan announced that it will invest $20 billion over five years in loans for small businesses and low- and moderate-income communities, as well as in wage increases and philanthropic investments.

The announcement is significant in light of decades of failed federal government policies to increase minority and low-income home ownership. As I have discussed before, despite 40 years of government programs and subsidies, the racial homeownership gap has barely improved. In 1976, 44 percent of African-American families and 43 percent of Hispanic families owned their own home. By 2016, the Hispanic rate had risen by 3 percentage points while the African-American rate had fallen by 2 percentage points.

One such example is the Community Reinvestment Act, which forces banks to make loans to residents of low- and moderate-income neighborhoods. Fannie Mae and Freddie Mac, the two government mortgage behemoths that purchase home loans from lenders to support the market, are another. This government meddling in housing policy helped fuel the housing bubble that led to the 2007-2008 financial crisis. Free market reforms are now proving a more effective means of achieving the goal of expanding home ownership.

JP Morgan plans to increase community investments by 40 percent over five years; mortgage lending to expand homeownership in low- and moderate-income communities by 25 percent; and small business lending by $4 billion. The bank attributes the increase to “the firm’s strong and sustained business performance, recent changes to the U.S. corporate tax system and a more constructive regulatory and business environment.”

My CEI colleague Trey Kovacs has outlined how tax reform has already achieved what the union-backed “Fight for $15” movement promised, with numerous companies raising wages beyond $15 an hour. Now it seems that tax reform and deregulation are achieving what decades of federal government policy could not—lending and investing in low-income communities to advance economic opportunity. Imagine what could be achieved if Congress and the White House put their efforts behind substantial financial reform."

Don Boudreaux on the bailouts of General Motors and Chrysler

See There’s Much More to the Matter.

"Here’s a letter to the Washington Post:
E.J. Dionne insists that the Obama administration’s bailouts of General Motors and Chrysler are evidence that “government works” (“Don’t buy the spin. Government works.” Jan. 22).  Forget that, as Mr. Dionne admits, taxpayers got back only seven out of every eight of the 80 billion dollars of the bailout money, for a return of negative (!) twelve-and-a-half percent.
Instead, recognize that the most serious arguments against bailouts are not the superficial claims that Mr. Dionne quotes from the likes of Mitt Romney and Rush Limbaugh.  The correct economic arguments against bailouts all point more deeply to what is not seen.  Yes, we all see that resources directed to G.M. and Chrysler by the bailouts ensured that these companies survived intact.  No serious person ever doubted this outcome.  But what Mr. Dionne and too many others don’t see are real costs and hidden consequences – costs and consequences that are revealed by asking probing questions.
For example: What would G.M. and Chrysler look like today without the bailouts?  Contrary to Mr. Dionne’s assumption, failure to bail out these companies was not destined to lead to their total demise.  Instead, they would likely have gotten private funding, on the condition that they scale down their operations.  Might such reductions in size mean that today these companies would be better able to withstand future financial crises – and, hence, be less likely to ‘need’ bailouts in the future?
Another question: how would the resources commandeered by Uncle Sam for G.M. and Chrysler otherwise have been used?  Mr. Dionne assumes that these resources were and would have remained idle.  But that’s incorrect.  If not directed artificially by government to G.M. and Chrysler, these resources would have been directed naturally by the market to other productive uses.  What goods and services are we Americans today not producing and consuming because of the bailouts?  What jobs do Americans today not have because of the bailouts?
And finally: what expectations did those bailouts create, and what are the consequences of those expectations?  Because large and highly visible firms are now more likely to be bailed out, executives of such firms can be more careless in their decision-making.  The future consequences of such carelessness almost surely include higher costs of production, lower real wages, and a further shifting of executives’ attention away from meeting the demands of consumers spending their own money and toward gratifying the whims of politicians spending other people’s money.
Sincerely,
Donald J. Boudreaux
Professor of Economics
and
Martha and Nelson Getchell Chair for the Study of Free Market Capitalism at the Mercatus Center
George Mason University
Fairfax, VA  22030
…..
When it comes to economics, E.J. Dionne is like far too many pundits: he mistakes that which is visibly and vividly in front of his nose for all of economic reality."
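The letter's "negative twelve-and-a-half percent" follows directly from the figures it cites; a quick check:

bailout = 80e9                  # dollars disbursed
recovered = bailout * 7 / 8     # "seven out of every eight" dollars returned
print(recovered / bailout - 1)  # -0.125, i.e. a return of -12.5%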

Monday, January 22, 2018

Canada's government-run healthcare system provides objectively worse care than the United States' more market-oriented system

See Bernie, Stop Fibbing About Canada's Single-Payer Disaster by Sally C. Pipes. She is president, CEO, and Thomas W. Smith Fellow in Health Care Policy at the Pacific Research Institute. Her latest book is "The Way Out of Obamacare" (Encounter 2016). Excerpt:
"Canada's government-run healthcare system provides objectively worse care than the United States' more market-oriented system. Canadians must endure long wait times, lack access to the latest medical technology, and are more likely to die of diseases like cancer.

This rationing of care is by design. In Canada, health care is free at the point of service. That means people pay nothing when they visit the doctor.

Since Canadian health officials can't use co-pays or co-insurance to steer people toward the most cost-effective treatments and providers, they have only one way to constrain spending. And that's by limiting access to medical services.

In Ontario, for example, the number of hospital beds has plummeted from more than 33,000 in 1990 to just 19,000 in 2016.  Nationwide, six in ten nurses believe that hospitals don't have enough staff members.

Consequently, wait times are terrible. According to the Fraser Institute, a Canadian think tank, the median patient waited about 20 weeks for "medically necessary" treatments and procedures last year.  That's more than double the wait time in 1993.

In the United States, only 8% of patients wait more than four months for non-emergency surgery.

Canadian patients who need specialized care wait even longer. For example, the median patient waits nearly 47 weeks for brain surgery.  One Ontario doctor estimated that the earliest one of her patients could see a neurologist was four and a half years away.

Patients in need of orthopaedic surgery wait more than 15 weeks to see a specialist, after receiving a referral from their general practitioner. Then they typically wait another 22 weeks to actually receive treatment.

Despite the demand for care that these waits signify, dozens of orthopedic surgeons are unemployed in Canada. "Scores of empty operating rooms (are) sitting idle every night across Canada," wrote Adam Kassam, chief resident in the department of Physical Medicine & Rehabilitation at the University of Western Ontario, in an opinion piece for the Canadian Broadcasting Corporation.

Waits are particularly long in rural provinces. Last year, the median patient in New Brunswick waited 39 weeks between referral from a general practitioner and receipt of treatment from a specialist. In Nova Scotia, the median patient waited nearly 35 weeks.

Even Canadians in need of emergency care must stand in line. Three in ten patients wait at least four hours before being seen by a doctor at the ER.  Only 11% of Americans wait that long.

People seeking routine care fare no better. Last year, nearly six in ten Canadians were unable to secure a doctor's appointment within 48 hours.  By contrast, 55% of Americans are able to walk into a doctor's office or schedule a same-day or next-day appointment.

This rationing forces patients to wait in pain as their conditions worsen. Consider the story of Jennifer White. The 26-year-old Canadian suffers from intense seizures. After visiting her general practitioner this past February, Jennifer was told the earliest she could see an epilepsy specialist was more than a year later, according to CTV News, a Canadian broadcaster.

Or consider Angela Burgera, an art gallery owner in Alberta. She experienced severe hip pain that hindered her walking. After being put on a long wait list to receive a hip replacement, she shelled out thousands of dollars to seek treatment in the Cayman Islands, she told the Huffington Post earlier this year.

She's not alone. In 2014 and 2015, a whopping 100,000 patients left Canada to obtain treatment elsewhere."

Building energy codes result in more distortions for lower-income households

See The Distributional Effects of Building Energy Codes By Christopher D. Bruegge, Tatyana Deryugina and Erica Myers. Here is the abstract:

"State-level building energy codes have been around for over 40 years, but recent empirical research has cast doubt on their effectiveness. A potential virtue of standards-based policies is that they may be less regressive than explicit taxes on energy consumption. However, this conjecture has not been tested empirically in the case of building energy codes. Using spatial variation in California’s code strictness created by building climate zones, combined with information on over 350,000 homes located within 3 kilometers of climate zone borders, we evaluate the effect of building energy codes on home characteristics, energy use, and home value. We also study building energy codes’ distributional burdens. Our key findings are that stricter codes create a non-trivial reduction in homes’ square footage and the number of bedrooms at the lower end of the income distribution. On a per-dwelling basis, we observe energy use reductions only in the second lowest income quintile, and energy use per square foot actually increases in the bottom quintile. Home values of lower-income households fall, while those of high-income households rise. We interpret these results as evidence that building energy codes result in more distortions for lower-income households and that decreases in square footage are responsible for much of the code-induced energy savings." 

Sunday, January 21, 2018

Forty-five years ago a run of cold winters caused a “global cooling” scare.

See The mysterious cycles of ice ages: Orbital wobbles, carbon dioxide and dust all seem to contribute by Matt Ridley. Excerpt:

"Forty-five years ago a run of cold winters caused a “global cooling” scare. “A global deterioration of the climate, by order of magnitude larger than any hitherto experienced by civilised mankind, is a very real possibility and indeed may be due very soon,” read a letter to President Nixon in 1972 from two scientists reporting the views of 42 “top” colleagues. “The cooling has natural causes and falls within the rank of the processes which caused the last ice age.” The administration replied that it was “seized of the matter”.

In the years that followed, newspapers, magazines and television documentaries rushed to sensationalise the coming ice age. The CIA reported a “growing consensus among leading climatologists that the world is undergoing a cooling trend”. The broadcaster Magnus Magnusson pronounced on a BBC Horizon episode that “unless we learn otherwise, it will be prudent to suppose that the next ice age could begin to bite at any time”.

Newsweek ran a cover story that read, in part: “The central fact is that, after three quarters of a century of extraordinarily mild conditions, the Earth seems to be cooling down. Meteorologists disagree about the cause and extent of the cooling trend, as well as over its specific impact on local weather conditions. But they are almost unanimous in the view that the trend will reduce agricultural productivity for the rest of the century.”

This alarm about global cooling has largely been forgotten in the age of global warming, but it has not entirely gone away. Valentina Zharkova of Northumbria University has suggested that a quiescent sun presages another Little Ice Age like that of 1300-1850. I’m not persuaded. Yet the argument that the world is slowly slipping back into a proper ice age after 10,000 years of balmy warmth is in essence true. Most interglacial periods, or times without large ice sheets, last about that long, and ice cores from Greenland show that each of the past three millennia was cooler than the one before.

However, those ice cores, and others from Antarctica, can now put our minds to rest. They reveal that interglacials start abruptly with sudden and rapid warming but end gradually with many thousands of years of slow and erratic cooling. They have also begun to clarify the cause. It is a story that reminds us how vulnerable our civilisation is. If we aspire to keep the show on the road for another 10,000 years, we will have to understand ice ages.

The oldest explanation for the coming and going of ice was based on carbon dioxide. In 1895 the Swede Svante Arrhenius, one of the scientists who first championed the greenhouse theory, suggested that the ice retreated because carbon dioxide levels rose, and advanced because they fell. If this were true, he thought, then industrial emissions could head off the next ice age.

Burning coal, Arrhenius said, was therefore a good thing: “By the influence of the increasing percentage of carbonic acid in the atmosphere, we may hope to enjoy ages with more equable and better climates.”

There is indeed a correlation in the ice cores between temperature and carbon dioxide. There is less CO2 in the air when the world is colder and more when it is warmer. Analysis in the late 1990s of an ice core from Vostok in Antarctica found that CO2 moves in lock-step with temperature -- more CO2, warmer; less CO2, colder. As Al Gore put it sarcastically in his 2006 film An Inconvenient Truth, looking at the Vostok graphs: “Did they ever fit together? Most ridiculous thing I ever heard.” So Arrhenius was right? Is CO2 level the driver of ice ages?

Well, not so fast. Inconveniently, the correlation implies causation the wrong way round: at the end of an interglacial, such as the Eemian period over 100,000 years ago, carbon dioxide levels remained high for many thousands of years while temperature fell steadily. Eventually CO2 followed temperature downward. If carbon dioxide were a powerful cause, it would not show such a pattern. The world could not cool down while CO2 remained high.

In any case, what causes the carbon dioxide levels to rise and fall? In 1990 the oceanographer John Martin came up with an ingenious explanation. During ice ages, there is lots of dust blowing around the world, because the continents are dry and glaciers are grinding rocks. Some of that dust falls in the ocean, where its iron-rich composition fertilizes plankton blooms, whose increased photosynthesis draws down the carbon dioxide from the air. When the dust stops falling, the plankton blooms fail and the carbon dioxide levels rise, warming the planet again.

Neat. But almost certainly too simplistic. We now know, from Antarctic ice cores, that in each interglacial, rapid warming began when CO2 levels were very low. Temperature and carbon dioxide rise together, and there is no evidence for a pulse of CO2 before any warming starts; if anything, the reverse. Well, all right, said scientists, but carbon dioxide is a feedback factor – an amplifier. Something else starts the warming, but carbon dioxide reinforces it. Yet the ice cores show that in each interglacial, cooling returned when CO2 levels were very high, and they remained high for tens of thousands of years as the cooling continued. Even as a feedback, carbon dioxide looks feeble."
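The lead-lag reasoning in the last few paragraphs is checkable in principle: shift one series against the other and see where the correlation peaks. A sketch, assuming an ice-core record already resampled to an even time step; the file and column names are invented for illustration.

import pandas as pd

core = pd.read_csv("ice_core.csv")  # hypothetical columns: temp, co2

# corr(temp[t], co2[t + k]): a peak at positive k means CO2 moves after
# temperature, the pattern the ice cores discussed above are said to show.
corrs = {k: core["temp"].corr(core["co2"].shift(-k)) for k in range(-20, 21)}
best = max(corrs, key=corrs.get)
print(f"Correlation peaks with CO2 displaced {best} steps after temperature")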