Friday, May 26, 2017

David Henderson Corrects Larry Summers On Trade (gains to consumers matter)

See Larry Summers Trumps Trump at EconLog.
"On agriculture, China reiterated a promise that it has broken in the past to let in more beef. Previously, we, as reciprocity, had been withholding publication of a permissive rule on Chinese poultry, but we have now relented. Advantage China.
This is from Larry Summers, "Trump's 'China Deal' is only a good deal for China," May 24.

HT2 Mark Thoma.

In estimating "advantage," what factor is Larry missing? U.S. consumers who like poultry. There are a lot of us. When you see someone forgetting even to point out that our consumers gain when foreign producers send us cheaper products, what prominent U.S. politician does that sound like? That's right: Trump. Thus the title of this post: Larry Summers, in his rhetoric, is starting to imitate Donald Trump.

Back in May 2000, I wrote an article in Fortune titled "What Clinton and Gore Don't Say." In it, I pointed out that U.S. trade negotiators rarely point out the benefits to consumers from free trade. I ended by writing:

In the negotiation process, the U.S. treats cuts in its trade restrictions as concessions rather than as the benefits they are. That's why the consumers' gains get lost in the shuffle. Economists like U.S. Treasury Secretary Lawrence Summers understand that. But U.S. Trade Negotiator Charlene Barshefsky and Vice President Gore? I'm not so sure.

Now, I'm no longer sure about Larry. And, in a way, he's even worse than Trump. He writes:

In addition to the leverage we sacrificed by committing to issue the poultry rule, we made other meaningful concessions. First, we agreed to allow exports of liquefied natural gas from the US to China. To at least a small extent that would mean higher heating costs for U.S. consumers and higher energy costs for U.S. producers.

Get it? Normally, even the Trumps and Summers of the world will at least regard as a gain an increase in U.S. exports due to declines in trade barriers. But because this particular gain in U.S. exports is due to a decline in a U.S. trade barrier, Larry counts it as a loss. It is a loss for U.S. consumers, but it's not hard to show that it's a net gain to the United States when we include the gains to LNG producers."

Who’d a-thunk it? Like most central planning, public transit systems are very costly and often don’t serve the public very well.

From Mark Perry.
"Some recent news reports on the declines in the use of mass transit systems across America:

Example 1: L.A. bus ridership continues to fall; officials now looking to overhaul the system
Example 2: CARTA’s (Chattanooga, TN) Main Route Suffers Another Blow As Overall Ridership Continues To Drop
Example 3: Miami-Dade shrinking Metrorail hours as ridership dips
Example 4: Subway Ridership Declines in New York. Is Uber to Blame?
Example 5: City Colleges (Chicago) has paid $3 million for a bus shuttle with few riders

A few related items here…

Related 1: “Does America Need More Urban Rail Transit?” is the title of a recent Manhattan Institute report, and I think the answer is “No.” Here’s an excerpt from the abstract:
Low-density U.S. cities with new rail-transit systems have experienced limited ridership and single-digit transportation market share. Federal funds should be directed to rebuilding aging rail transit in cities where it already exists and where it serves a critical transportation function. In most cases, state and local governments should focus on providing transit service via traditional buses, not building new rail lines.
Related 2: Transit Crime Is on the Rise; here’s an excerpt:
Is there an upsurge in crime on and around transit, and if so, why? A few days ago, a Portland woman was stabbed at a light-rail stop, supposedly by a complete stranger. The very next day, a remarkably similar report came out of Tempe, Arizona, except in this case police said the victim and alleged perpetrator were acquaintances.
A month ago, a gang of at least 40 teenagers boarded a BART train and, while some held the doors to prevent the train from leaving the station, robbed seven passengers and beat up two or more who refused to cooperate. A few days before that, someone shot and killed a passenger and wounded three more on board a MARTA train in Atlanta. After arresting a suspect, police called it an “isolated incident,” but it doesn’t sound so isolated anymore. New York City is enjoying a drop in crime–except on board transit vehicles, where crime is up 26 percent.
… The numerous reports of transit crimes in the last few weeks are only going to depress ridership even further.
Related 3: From the news report “A Canadian town wanted a transit system. It hired Uber”:
Uber, the global car-hailing service, has fought its way into resistant cities around the world, despite being hit by raw eggs and rush-hour roadblocks in Montreal and Toronto, fires in Paris and smashed windshields in Mexico City. But in Innisfil, a small yet sprawling Canadian town north of Toronto, the company has met a somewhat different reception. Town leaders have embraced the service as an alternative to costly public transportation, causing local taxi companies to worry about the effect on their business.
Innisfil is a rural quadrilateral-shaped town of about 104 square miles, on the southwestern shore of Ontario’s Lake Simcoe. It has no public transportation other than stops on a regional bus line. This week, the town inaugurated a pilot program for what Uber says is its first full ridesharing-transit partnership, providing subsidized transportation for the town’s 36,000 people.
Related 4: “10 Reasons to Stop Subsidizing Urban Transit” by Cato’s Randall O’Toole."

Thursday, May 25, 2017

A 2016 report from the Pentagon claims that 22% of the military’s infrastructure is unnecessary

See Trump's Cost Cutting May Involve Military Closures, But Cities Shouldn't Worry by Adam Millsap of Mercatus.
"Yet there is little evidence that base closures have significant adverse effects on local economies. One study examining base closures from 1970 to 1994 found that the effect of a closure on local (county-level) employment was limited to the actual number of military jobs lost and that there was no negative employment effect on other sectors of the economy. In fact, it actually found evidence of indirect job creation rather than job destruction, though the effect was small. The study also found that, on average, local per capita income was unaffected by a closure.

Another study that explicitly takes into account reutilization of military infrastructure after a closure found that the long-run effects of a closure on local employment were positive overall. In addition to the reutilization of valuable infrastructure, the authors attribute the positive effect to increased federal education assistance that often accompanies a base closure, increased spending by military retirees on non-military-base retailers (instead of the BX and PX) and an increase in optimism as people adjusted to the new circumstances. The authors note that while base closures are never appealing to the workers and communities directly impacted, “the overall picture is most certainly not one of doom and gloom.”"

"But as the research shows, many of the places affected by base closures adapt and turn out just fine: The infrastructure can be refurbished and reused by private companies.

Reutilization is especially important when considering the economic impact of military facilities. Since the products and services provided by the military are not sold on a market and subject to the signals of profit and loss, we have little knowledge about how much people actually value them. This makes it hard to know whether the military’s inputs, including land and infrastructure, are being put to their highest-valued use. And if the land and infrastructure are not being put to their highest-valued use, the economy is not operating as efficiently as it could be.

A former military facility in Key West, the Truman Annex, was developed after being relinquished by the military, and today its hotels and rental homes contribute to the area’s robust tourism industry."

ACA Medicaid Expansion: A Lot of Spending of Little Value

By Brian Blase of Mercatus
"In new research published by the Mercatus Center, I analyze the causes and impact of the much higher-than-expected enrollment and spending associated with the Affordable Care Act (ACA) Medicaid expansion. Though unpredicted by Washington experts, the results were predictable. The federal government’s 100% financing of state spending on expansion enrollees has led states to boost enrollment and create high payment rates. (See this 2-minute Mercatus video for additional information on this significant development.)

In states that have expanded, enrollment and per enrollee spending are nearly 50% higher than predicted. While interest groups within the states—particularly hospitals and insurers—benefit from the higher spending being charged to federal taxpayers, substantial evidence suggests much of this new spending is wasted or provides little value for its intended recipients.

An important 2015 study showed that Medicaid expansion enrollees obtain low value through the program. Moreover, an increasing amount of spending on the program is lost to waste, fraud, and abuse. The Wall Street Journal highlighted a new government report showing that improper Medicaid spending exploded between 2013 and 2016. Improper payments amounted to about $67 billion in 2016, a $41 billion increase from the estimated $26 billion in 2013. The large increase in improper Medicaid payments has occurred while the ACA Medicaid expansion took effect, suggesting that the expansion is the main cause of the stunning rise. (Interestingly, the Department of Health and Human Services has pulled the report from the Internet.)

Perverse Incentives Produce Lots of Waste

Under the ACA, the federal government reimburses 100% of state spending on expansion enrollees—non-disabled, working-age adults with income between the state’s previous eligibility thresholds and 138% of the federal poverty level ($16,394 in 2016). After this year, the federal share gradually phases down until 2020 when it reaches 90%, where it is scheduled to remain.

Common sense suggests that a jurisdiction is more likely to increase spending on an area when the costs can largely be passed to other jurisdictions. This type of financing structure also lessens a jurisdiction’s incentive to ensure that the spending provides high value with low amounts of waste.

ACA Medicaid Explosion

Medicaid, already on an unsustainable cost-growth trajectory before the ACA, has experienced unprecedented enrollment and spending growth since 2013. Medicaid spending in 2015 was nearly $100 billion above the 2013 amount.

Medicaid expansion enrollment and spending is higher than projected even though not as many states as expected have adopted the expansion. My research shows the difference in the Congressional Budget Office’s (CBO) Medicaid expansion enrollment and spending projections over time. The first figure shows CBO’s most recent estimate of expansion enrollment along with CBO’s estimates from 2010, 2014, and 2015.

Enrollment is much higher than CBO expected when the ACA passed in 2010, and it is also significantly higher, particularly in 2017 and beyond, than estimated in both CBO’s 2014 and 2015 reports. Essentially, this means that far more people—roughly 50% more—have enrolled and are projected to enroll in Medicaid in the states that expanded than was expected by CBO previously. In addition to higher-than-expected enrollment, spending per newly eligible Medicaid enrollee is much greater than expected. As I wrote in July when the Obama administration released the 2015 Medicaid actuarial report, government spending on newly eligible enrollees equaled about $6,366 in 2015—an amount 49% higher than its projection of $4,281 from just one year earlier.

Both higher-than-expected enrollment and spending per enrollee have resulted in the Medicaid expansion being much more costly than projected. For example, in April 2014, CBO projected that the Medicaid expansion would cost $42 billion in 2015. The actual cost was $68 billion, about 62% higher.
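The overrun percentages quoted in this excerpt can be verified with back-of-the-envelope arithmetic. A quick sketch, using only the figures cited above:

```python
# Total-cost overrun: CBO's April 2014 projection vs. actual 2015 spending.
projected_total = 42  # $ billions, projected Medicaid expansion cost for 2015
actual_total = 68     # $ billions, actual 2015 cost
total_overrun = (actual_total / projected_total - 1) * 100
print(f"Total cost overrun: {total_overrun:.0f}%")  # ~62%

# Per-enrollee overrun: 2015 actuarial report vs. the prior-year projection.
projected_per_enrollee = 4281  # $ per newly eligible enrollee, projected
actual_per_enrollee = 6366     # $ per newly eligible enrollee, actual 2015
per_enrollee_overrun = (actual_per_enrollee / projected_per_enrollee - 1) * 100
print(f"Per-enrollee overrun: {per_enrollee_overrun:.0f}%")  # ~49%
```

Both results match the percentages Blase reports.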

The second figure shows CBO’s projections of federal spending on the Medicaid expansion and how CBO’s most recent projection of the cost are substantially above previous expectations.

Using CBO’s current projections of state adoption of the expansion for its previous estimates shows that federal Medicaid spending between 2016 and 2024 is $232 billion in excess of its April 2014 estimates.

Both figures adjust CBO’s previous year estimates for its current assumptions about state adoption of the expansion. CBO now expects states to adopt the expansion at a slower rate than it has previously projected. In 2010, before the Supreme Court made Medicaid expansion optional for states, CBO expected all states would adopt the expansion. This adjustment allows for a better comparison of enrollment and spending because it holds constant CBO’s assumptions about the percentage of the newly eligible Medicaid population residing in expansionary states.

Too Little Value from Medicaid Expansion

Prior to the ACA, when states shouldered their traditional share of Medicaid spending (an average of 43%), only Vermont and the District of Columbia concluded that the tradeoffs—higher state taxes and reduced spending elsewhere—justified expanding Medicaid to the ACA expansion population. It turns out that states that did not expand Medicaid prior to the ACA almost certainly made a wise cost-benefit calculation.

A 2015 study from economists at Harvard, MIT, and Dartmouth, assessing an earlier Medicaid expansion in Oregon to a similar population to the ACA expansion, found that “[a]cross a variety of alternative specifications … Medicaid’s value to recipients is lower than the government’s costs of the program, and usually substantially below.” They estimated that the “welfare benefit to recipients from Medicaid per dollar of government spending range from about $0.2 to $0.4.” Oregon Medicaid expansion enrollees did not have significant improvements in blood pressure, cholesterol, or blood sugar relative to people who did not enroll in Medicaid.

Reform Medicaid, Stop Viewing Program as Economic Stimulus

In order to increase the value that enrollees receive from Medicaid and lessen the amount lost to waste, fraud, and abuse, it is necessary to change the central incentives underlying the federal-state partnership. In particular, the incentives of the ACA’s elevated reimbursement rate lead policymakers to view Medicaid as an engine for economic stimulus instead of as a welfare program. For example, according to the White House:

“By expanding Medicaid, States can pull billions in additional Federal funding into their economies every year, with no State contribution over the next three years and only a modest one thereafter for coverage of newly eligible people.”

A study by Deloitte Consulting and the University of Louisville projects that the ACA’s Medicaid expansion will add 40,000 jobs and $30 billion to Kentucky’s economy through 2021. The problem with this and similar studies is that they assess the decision of a state in isolation without factoring in other states’ decisions regarding expansion. For example, Kentucky is worse off when other states expand, because her citizens pay federal taxes to finance health benefits that accrue only to individuals in those other states.

Economist Robert Book points out that the American economy is worse off from the ACA expansion “because taxation itself has a negative effect on economic activity, over and above the amount of tax collected.” Book estimates a reduction of $174 billion in economic activity over a 10-year period if all states expand Medicaid. He also estimated a total job loss of more than 200,000 positions from 2014 to 2017 if all states expanded Medicaid.

Sensible Medicaid reform has two central goals: reduce the unsustainable trajectory of spending and produce better outcomes for people most in need. The ACA Medicaid expansion significantly adds to the unsustainable spending trajectory of the program, likely fails to produce health outcomes or value to recipients worth the corresponding cost, and creates a large federal government bias toward nondisabled, working-age adults at the expense of traditional Medicaid enrollees. Moving Medicaid back in the right direction requires ending the ACA’s elevated federal reimbursement rate that has given rise to these problems."

Wednesday, May 24, 2017

Allan Meltzer’s history book found that the Fed had rarely come up with just the right medicine for the economy

See Allan Meltzer Made a Career as the Chief Scourge of Financial Regulators: Professor wrote 2,100 pages on Fed history and found little to admire. Obituary from the WSJ. By James R. Hagerty. Excerpts:
"Allan Meltzer devoted a large share of his scholarly life to telling the Federal Reserve and other financial regulators, politely but firmly, that they were falling down on the job.

Dr. Meltzer’s two-volume history of the U.S. central bank, stretching beyond 2,100 pages, found that the Fed had rarely come up with just the right medicine for the economy. He chastised Fed officials for paying too much heed to the “daily yammering” of financial markets and too little to the long-term health of the economy.

The Carnegie Mellon University economist also was a co-founder of the Shadow Open Market Committee, a gathering of monetarists and mavericks who since 1973 have advised and second-guessed the Fed.

Through his books and articles, Dr. Meltzer became an influential opponent of what he saw as excessive regulation of banks and of bailouts for those that misbehaved. If banks were allowed to fail, he argued, shareholders and executives would learn to be more prudent. “Capitalism without failure is like religion without sin,” he often said.

Dr. Meltzer died May 8 of pneumonia at a hospital in Pittsburgh. He was 89."

"He evolved into a libertarian. Capitalism, he wrote in one essay, “works well with people as they are, not as someone would like to make them.”"

"At the University of California, Los Angeles, he earned master’s and doctoral degrees in economics."
"In 1957, he joined the faculty of what is now Carnegie Mellon in Pittsburgh."
"He was on the President’s Economic Policy Advisory Board during the Reagan administration and served as a consultant to congressional committees, the U.S. Treasury and foreign central banks. In 1999 and 2000, he headed a congressional panel seeking to improve the performance of the World Bank and International Monetary Fund.

His astoundingly deep dive into Fed history began in 1963 when U.S. Rep. Wright Patman, a Texas Democrat, asked him to do a study of the institution. His original studies “were hastily written to meet congressional deadlines,” he wrote. He kept digging and spent 14 years writing a history of the Fed. In 2003, The Wall Street Journal declared his first volume “masterly.” Former Fed Chairman Alan Greenspan wrote in the preface that the history was “fascinating and valuable.”

During the Depression of the 1930s, the Fed failed to prevent a steep fall in the money supply, missing a chance to alleviate the crisis, Dr. Meltzer found. Later errors by the central bank made inflation worse and contributed to the housing market collapse that helped precipitate the 2008-09 recession, he wrote.

Though he thought the Fed was usually too concerned with the short term, he said one exception was the fight led by Fed Chairman Paul Volcker against inflation in the early 1980s. Mr. Volcker “pursued a long-term strategy…knowing that it wasn’t going to happen in the next quarter,” Dr. Meltzer said during a panel discussion in 2010."

"Dr. Meltzer loathed the proliferation of regulations. Financial firms would sneak around them, and market changes would soon render the rules obsolete, he wrote. A wiser approach, he said, would be to require higher capital ratios for larger banks. That would deter banks from growing into behemoths deemed too big to fail, Dr. Meltzer said. If bankers “make the wrong calls,” he said in one interview, they and their shareholders “must be made to pay the price themselves.”

He worried about U.S. budget deficits and unfunded obligations. “At the city, state and federal government, we’ve promised people things that we aren’t going to be able to do,” he said in a presentation on one of his books. “We’re going to have to take away things that have been promised….We’re going to have higher tax rates and we’re going to have less spending.”

He deplored the congressional habit of leaving details to regulatory agencies. “Much regulation has the effect of replacing the rule of law with arbitrary decisions by lawyers and bureaucrats,” he wrote in a 2012 book, “Why Capitalism.”

Regulators were gaining too many powers to regulate as they saw fit, undermining the rule of law, he warned."

America’s trade deficit isn’t nearly as large as the official figures suggest because of where companies report their profits

See The True Trade Deficit: Halving the official figure gets closer to the truth, as the iPhone example shows by Martin Neil Baily and Adam Looney in the WSJ. They both are senior fellows at the Brookings Institution. Excerpts:
"Protectionists like to cite the U.S. trade deficit—last year imports of goods and services exceeded exports by $501 billion—as evidence that unfair trade agreements have hurt American competitiveness. But a new working paper from the Bureau of Economic Analysis, published in March, challenges this narrative: Turns out, America’s trade deficit isn’t nearly as large as the official figures suggest.

To illustrate this finding, the economists Fatih Guvenen, Raymond Mataloni, Dylan Rassier and Kim Ruhl examine the iPhone. The device is said to be “Designed by Apple in California. Assembled in China.” Yet to lower its tax bill, Apple reports that its iPhone profits were earned in neither place, but were instead accrued in some other country.

Assume an iPhone is assembled in China for $250 and sells in Europe and the U.S. for $750. Apple’s profit is $500. Often that economic value gets attributed to an Apple subsidiary set up in a low-tax nation like Ireland or Luxembourg.

If most iPhone development is actually done in California, most of the $500 represents American production and should be included in U.S. gross domestic product. Then, when an iPhone is sold in Europe, that value should count as an export from the U.S. When a phone is instead sold in the U.S., the net amount of the import should only be the $250 cost of manufacturing in Asia, since the rest is produced by Californians.

With this in mind, the study’s authors estimate how much American trade is mismeasured. Although the official trade deficit in 2012 was $537 billion, they conclude that U.S. exports were undercounted and imports overstated by a combined $280 billion. With this adjustment, the real trade deficit that year shrinks to $257 billion—or about 1.6% of GDP. Trade still isn’t balanced, but the deficit appears to be less than half the size everyone thought.

In other words, more than half the goods and services that were counted in the U.S. trade deficit actually were produced right here in America. This makes it harder to argue that an outsize trade deficit is responsible for American manufacturing’s woes. It’s true that traditional blue-collar workers have had trouble competing globally. But high-skilled American workers and the companies that employ them have been competing just fine."
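The accounting in the excerpt reduces to simple value-added arithmetic. The sketch below uses the article's illustrative iPhone numbers and the study's aggregate 2012 figures; the roughly $16.2 trillion figure for 2012 U.S. GDP is my own assumption, not stated in the excerpt:

```python
# Value-added view of one iPhone sale, using the article's illustrative numbers.
assembly_cost_china = 250  # $ paid for assembly in China
retail_price = 750         # $ sale price in the U.S. or Europe
apple_profit = retail_price - assembly_cost_china  # largely U.S. design value
print(f"U.S. value-added per phone: ${apple_profit}")  # $500

# A U.S. sale should add only the $250 of foreign value-added to imports,
# not the full $750 retail price.

# The study's aggregate adjustment applied to the official 2012 deficit:
official_deficit = 537  # $ billions
mismeasurement = 280    # $ billions (undercounted exports + overstated imports)
adjusted_deficit = official_deficit - mismeasurement
us_gdp_2012 = 16_200    # $ billions; assumed figure, not from the excerpt
print(f"Adjusted deficit: ${adjusted_deficit}B, "
      f"{adjusted_deficit / us_gdp_2012 * 100:.1f}% of GDP")  # $257B, ~1.6%
```

This reproduces the article's $257 billion adjusted deficit and its roughly 1.6%-of-GDP figure.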

Tuesday, May 23, 2017

Between 1960 and 2015, the population grew by 142%, from 3.035 billion to 7.35 billion. Yet commodity prices fell.

See Why the human brain is our most precious commodity by Marian L. Tupy.
"Between 1960 and 2015, world population increased by 142 per cent, rising from 3.035 billion to 7.35 billion. During that time, average income per capita adjusted for inflation increased by 177 per cent, rising from $3,680 to $10,194. Moreover, after 56 years of human use and exploration, the vast majority of the commodities tracked by the World Bank are cheaper than they used to be – either absolutely or relative to income. That was not supposed to have happened.
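The growth percentages quoted above check out arithmetically. A quick sketch using only the numbers in the excerpt:

```python
# Percentage growth implied by the population and income figures above.
pop_1960, pop_2015 = 3.035, 7.35  # world population, billions
inc_1960, inc_2015 = 3680, 10194  # real income per capita, $

pop_growth = (pop_2015 / pop_1960 - 1) * 100
inc_growth = (inc_2015 / inc_1960 - 1) * 100
print(f"Population: +{pop_growth:.0f}%")        # ~142%
print(f"Income per capita: +{inc_growth:.0f}%")  # ~177%
```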

According to conventional wisdom, population growth was to be a harbinger of poverty and famine. Yet, human beings, unlike other animals, innovate their way out of scarcity by increasing the supply of natural resources or developing substitutes for overused resources. Human ingenuity, in other words, is “the ultimate resource” that makes all other resources more plentiful.

Earlier this year, the World Bank updated its Pink Sheet, which tracks the prices of 72 commodities going back (in most cases) to 1960. I have eliminated some repetitive datasets and some datasets that contained data for only very short periods of time. I was left with 42 commodity prices, which are included in the chart below.

As can be seen, out of the 42 distinct commodity prices measured by the World Bank, 19 have declined in absolute terms. In other words, adjusted for inflation, they were cheaper in 2016 than in 1960. Twenty-three commodities have increased in price over the last 56 years. However, of those 23 commodities, only three (crude oil, gold and silver) appreciated more than income. In a vast majority of cases, therefore, commodities became cheaper either absolutely or relatively.

Figure 1: Worldwide Commodity Prices, Population and Income, 1960-2016

It is often assumed that population growth must inevitably result in the exhaustion of natural resources, environmental destruction and even mass starvation. Take, for example, The Limits to Growth report, which was published by the Club of Rome in 1972.  Based on MIT computer projections, the report looked at the interplay between industrial development, population growth, malnutrition, the availability of nonrenewable resources and the quality of the environment. It concluded:
 “If present growth trends in world population, industrialization, pollution, food production, and resource depletion continue unchanged, the limits to growth on this planet will be reached sometime within the next 100 years… The most probable result will be a rather sudden and uncontrollable decline in both population and industrial capacity… Given present resource consumption rates and the projected increase in these rates, the great majority of currently nonrenewable resources will be extremely expensive 100 years from now.”
It has been 45 years since the publication of The Limits to Growth. So far, the dire predictions of the Club of Rome have not come to pass. On the contrary, we have seen an overall decline of commodity prices relative to income – in spite of a growing global population.

Can this happy trend continue for another 55 years and beyond? To get a glimpse of the future, we must first understand the concept of scarcity.

Scarcity or “the gap between limited – that is, scarce – resources and theoretically limitless wants”, is best ascertained by looking at prices. A scarce commodity goes up in price, while a plentiful commodity becomes cheaper. That was the premise of a famous bet between Stanford University Professor Paul Ehrlich and University of Maryland Professor Julian Simon. Ehrlich shared the gloomy predictions of the Club of Rome.

In his best-selling 1968 book The Population Bomb, Ehrlich reasoned that over-population would lead to exhaustion of natural resources and mega-famines. “The battle to feed all of humanity is over. In the 1970s hundreds of millions of people will starve to death in spite of any crash programs embarked upon now. At this late date nothing can prevent a substantial increase in the world death rate,” he wrote.

Simon, in contrast, was much more optimistic. In his 1981 book The Ultimate Resource, Simon used empirical data to show that humanity has always gotten around the problem of scarcity by increasing the supply of natural resources or developing substitutes for overused resources. Human ingenuity, he argued, was “the ultimate resource” that would make all other resources more plentiful.

In 1980, the two thinkers agreed to put their ideas to a test. As Ronald Bailey wrote in his 2015 book The End of Doom: Environmental Renewal in the 21st Century:
 “In October 1980, Ehrlich and Simon drew up a futures contract obligating Simon to sell Ehrlich the same quantities that could be purchased for $1,000 of five metals (copper, chromium, nickel, tin, and tungsten) ten years later at inflation-adjusted 1980 prices. If the combined prices rose above $1,000, Simon would pay the difference. If they fell below $1,000, Ehrlich would pay Simon the difference. Ehrlich mailed Simon a check for $576.07 in October 1990. There was no note in the letter. The price of the basket of metals chosen by Ehrlich and his cohorts had fallen by more than 50 percent. The cornucopian Simon won.”
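The settlement arithmetic implied by Bailey's account is worth making explicit. Since the check to Simon covered the full shortfall below the $1,000 stake, the size of the check directly reveals how far the metals basket fell:

```python
# Settlement of the Simon-Ehrlich bet, per Bailey's account.
stake = 1000.00
check_to_simon = 576.07  # what Ehrlich mailed Simon in October 1990

# The check equals the shortfall, so the basket's 1990 value is the remainder.
basket_value_1990 = stake - check_to_simon
decline_pct = check_to_simon / stake * 100
print(f"Basket worth ${basket_value_1990:.2f}, "
      f"a {decline_pct:.1f}% decline")  # ~57.6%, "more than 50 percent"
```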
Simon’s critics, Ehrlich included, have since argued that Simon got lucky. Had his bet with Ehrlich taken place over a different decade, the outcome might have been different. Between 2001 and 2008, for example, the world experienced an unprecedented economic expansion that dramatically increased the price of commodities.

True, but Simon’s thesis does not have to account for price fluctuations that are heavily influenced by the ups and downs of the global economy as well as disruptive government policies (e.g., oil crises in 1973 and 1979). Rather, Simon posited that as a particular resource becomes scarcer, its price will increase and that will incentivize people to discover more of the resource, ration it, recycle it, or develop a substitute.

Commodity prices, academic research suggests, move in so-called “super-cycles,” lasting between 30 and 40 years. During periods of high economic growth, demand for commodities increases. When that happens, commodities go up in price. It is during this period that high commodity prices encourage the discovery of new supplies and the invention of new technologies. Once economic growth slows down, prices of “now copiously supplied commodities fall”.

Accordingly, the current commodity cycle seems to have peaked in 2008. In June 2008, for example, the price of West Texas Intermediate crude oil peaked at $154 per barrel. By January 2016 it stood at $29 (both figures are in inflation-adjusted 2016 US dollars). The once-high price of oil has led to hydraulic fracturing, which has revolutionized the oil industry. Today, “fracking” continues to enable us to access previously inaccessible oil reserves in record volumes. In fact, humanity has yet to run out of a single “non-renewable” resource.

Unfortunately, many people, including Paul Ehrlich, and many organizations, including the Club of Rome, believe that the answer to scarcity is to limit consumption of natural resources. In reality, consumption limits are unpopular and difficult to enforce. More often than not, their effects fall hardest on the most vulnerable. A switch from fossil fuels to “renewable” sources of energy, for example, has increased the price of gas and electricity in many European countries to such an extent that a new term – energy poverty – had to be coined.

According to the German magazine Der Spiegel, “Germany’s aggressive and reckless expansion of wind and solar power has come with a hefty price tag for consumers, and the costs often fall disproportionately on the poor.”  In democracies, such policies are, in the long run, unsustainable. More important is the fact that they are unnecessary, because real solutions to future scarcity are more likely to come from innovation and technological change.

I do not mean to trivialize the challenges that humanity faces or imply that we will be able to solve all of the problems ahead. Instead, I want to suggest that the human brain, the ultimate resource, is capable of solving complex challenges. We have done so with disease, hunger and extreme poverty, which have all fallen to historical lows, and we can do so with respect to the use of natural resources as well."

How British Columbia cut regulations by almost 30 percent & turned its economy around

See Using Regulatory Reform to Boost Growth by James Broughel of Mercatus. Excerpts:
"Regulatory reform could be a form of low-hanging fruit to boost growth at a time when state and federal budgets are pinched. The experience of the Canadian province of British Columbia offers a model for how this can be done. In 2001, the province began a red tape cutting effort, with a goal of reducing regulatory requirements by a third within three years. In June of 2001, the province had 382,139 requirements in place. By March of 2004, that number had fallen to 268,699—a decline of almost exactly 30 percent.

As the first chart illustrates, in the years leading up to the reform, British Columbia was experiencing a “dismal decade”—a phrase used to describe the sluggish economy in the province around that time. Real GDP in the province grew, on average, 1.9 percentage points less than Canada’s between 1994 and 2001. Meanwhile, growth shot up in the years after the reform began. British Columbia experienced a rebound, and growth was 1.1 percentage points higher per year, on average, than Canada’s between 2002 and 2006.

The absolute numbers make British Columbia’s improvement in economic performance more clear, as the second chart shows. In the 1994–2001 period, real GDP grew on average by 2.6 percent per year; this jumped to 3.8 percent in the 2002–2006 period. This difference is statistically significant (p=.08). A difference of just over one percentage point in growth might not sound like a lot, but consider the following: An economy that grows at 1 percent per year will double in size roughly every 70 years, but an economy growing at 2 percent takes half the time to double—just 35 years. An economy growing at 4 percent will double GDP in a mere 18 years."
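The doubling times quoted here follow from compound growth: at an annual growth rate r, output doubles after ln(2)/ln(1+r) years, which the familiar "rule of 70" approximates as 70/r. A quick sketch to check the figures:

```python
import math

def doubling_time(rate):
    """Years for output to double at a constant annual growth rate."""
    return math.log(2) / math.log(1 + rate)

for rate in (0.01, 0.02, 0.04):
    print(f"{rate:.0%} growth -> doubles in ~{doubling_time(rate):.0f} years")
```

At 1, 2, and 4 percent growth this gives roughly 70, 35, and 18 years, matching the figures in the excerpt.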

Monday, May 22, 2017

Wind Power Is Providing Very Little Of The World's Energy, With Some Serious Unwanted And Unintended Consequences

"The Global Wind Energy Council recently released its latest report, excitedly boasting that ‘the proliferation of wind energy into the global power market continues at a furious pace, after it was revealed that more than 54 gigawatts of clean renewable wind power was installed across the global market last year’.
You may have got the impression from announcements like that, and from the obligatory pictures of wind turbines in any BBC story or airport advert about energy, that wind power is making a big contribution to world energy today. You would be wrong. Its contribution is still, after decades — nay centuries — of development, trivial to the point of irrelevance.

Even put together, wind and photovoltaic solar are supplying less than 1 per cent of global energy demand. From the International Energy Agency’s 2016 Key Renewables Trends, we can see that wind provided 0.46 per cent of global energy consumption in 2014, and solar and tide combined provided 0.35 per cent. Remember this is total energy, not just electricity, which is less than a fifth of all final energy, the rest being the solid, gaseous, and liquid fuels that do the heavy lifting for heat, transport and industry.

[One critic suggested I should have used the BP numbers instead, which show wind achieving 1.2% in 2014 rather than 0.46%. I chose not to do so mainly because that number is arrived at by exaggerating the actual output of wind farms threefold in order to take into account that wind farms do not waste two-thirds of their energy as heat; also the source is an oil company, which would have given green blobbers an excuse to dismiss it, whereas the IEA is unimpeachable. But it's still a very small number, so it makes little difference.]

Such numbers are not hard to find, but they don’t figure prominently in reports on energy derived from the unreliables lobby (solar and wind). Their trick is to hide behind the statement that close to 14 per cent of the world’s energy is renewable, with the implication that this is wind and solar. In fact the vast majority — three quarters — is biomass (mainly wood), and a very large part of that is ‘traditional biomass’: sticks and logs and dung burned by the poor in their homes to cook with. Those people need that energy, but they pay a big price in health problems caused by smoke inhalation.

Even in rich countries playing with subsidised wind and solar, a huge slug of their renewable energy comes from wood and hydro, the reliable renewables. Meanwhile, world energy demand has been growing at about 2 per cent a year for nearly 40 years. Between 2013 and 2014, again using International Energy Agency data, it grew by just under 2,000 terawatt-hours.

If wind turbines were to supply all of that growth but no more, how many would need to be built each year? The answer is nearly 350,000, since a two-megawatt turbine can produce about 0.005 terawatt-hours per annum. That’s one-and-a-half times as many as have been built in the world since governments started pouring consumer funds into this so-called industry in the early 2000s.

At a density of, very roughly, 50 acres per megawatt, typical for wind farms, that many turbines would require a land area [half the size of] the British Isles, including Ireland. Every year. If we kept this up for 50 years, we would have covered every square mile of a land area [half] the size of Russia with wind farms. Remember, this would be just to fulfil the new demand for energy, not to displace the vast existing supply of energy from fossil fuels, which currently supply 80 per cent of global energy needs. [para corrected from original.]

Do not take refuge in the idea that wind turbines could become more efficient. There is a limit to how much energy you can extract from a moving fluid, the Betz limit, and wind turbines are already close to it. Their effectiveness (the load factor, to use the engineering term) is determined by the wind that is available, and that varies at its own sweet will from second to second, day to day, year to year.

As machines, wind turbines are pretty good already; the problem is the wind resource itself, and we cannot change that. It’s a fluctuating stream of low-density energy. Mankind stopped using it for mission-critical transport and mechanical power long ago, for sound reasons. It’s just not very good.
As for resource consumption and environmental impacts, the direct effects of wind turbines — killing birds and bats, sinking concrete foundations deep into wild lands — are bad enough. But out of sight and out of mind is the dirty pollution generated in Inner Mongolia by the mining of rare-earth metals for the magnets in the turbines. This generates toxic and radioactive waste on an epic scale, which is why the phrase ‘clean energy’ is such a sick joke and ministers should be ashamed every time it passes their lips.

It gets worse. Wind turbines, apart from the fibreglass blades, are made mostly of steel, with concrete bases. They need about 200 times as much material per unit of capacity as a modern combined cycle gas turbine. Steel is made with coal, not just to provide the heat for smelting ore, but to supply the carbon in the alloy. Cement is also often made using coal. The machinery of ‘clean’ renewables is the output of the fossil fuel economy, and largely the coal economy.

A two-megawatt wind turbine weighs about 250 tonnes, including the tower, nacelle, rotor and blades. Globally, it takes about half a tonne of coal to make a tonne of steel. Add another 25 tonnes of coal for making the cement and you’re talking 150 tonnes of coal per turbine. Now if we are to build 350,000 wind turbines a year (or a smaller number of bigger ones), just to keep up with increasing energy demand, that will require 50 million tonnes of coal a year. That’s about half the EU’s hard coal-mining output.
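The coal arithmetic in that paragraph can be checked the same way, using only the figures the text itself supplies:

```python
STEEL_TONNES_PER_TURBINE = 250   # total weight incl. tower, nacelle, rotor, blades
COAL_PER_TONNE_STEEL = 0.5       # tonnes of coal per tonne of steel
COAL_FOR_CEMENT = 25             # tonnes of coal per turbine, for the base
TURBINES_PER_YEAR = 350_000

coal_per_turbine = STEEL_TONNES_PER_TURBINE * COAL_PER_TONNE_STEEL + COAL_FOR_CEMENT
total_mt = coal_per_turbine * TURBINES_PER_YEAR / 1e6  # tonnes -> million tonnes

print(f"{coal_per_turbine:.0f} t of coal per turbine, ~{total_mt:.0f} Mt per year")
```

That comes to 150 tonnes per turbine and about 52 million tonnes a year, which the article rounds down to 50.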

Forgive me if you have heard this before, but I have a commercial interest in coal. Now it appears that the black stuff also gives me a commercial interest in ‘clean’, green wind power.

The point of running through these numbers is to demonstrate that it is utterly futile, on a priori grounds, even to think that wind power can make any significant contribution to world energy supply, let alone to emissions reductions, without ruining the planet. As the late David MacKay pointed out years back, the arithmetic is against such unreliable renewables.

MacKay, former chief scientific adviser to the Department of Energy and Climate Change, said in the final interview before his tragic death last year that the idea that renewable energy could power the UK is an “appalling delusion,” for the simple reason that there is not enough land.

The truth is, if you want to power civilisation with fewer greenhouse gas emissions, then you should focus on shifting power generation, heat and transport to natural gas, the economically recoverable reserves of which — thanks to horizontal drilling and hydraulic fracturing — are much more abundant than we dreamed they ever could be. It is also the lowest-emitting of the fossil fuels, so the emissions intensity of our wealth creation can actually fall while our wealth continues to increase. Good.

And let’s put some of that burgeoning wealth in nuclear, fission and fusion, so that it can take over from gas in the second half of this century. That is an engineerable, clean future. Everything else is a political displacement activity, one that is actually counterproductive as a climate policy and, worst of all, shamefully robs the poor to make the rich even richer."

Stringent restrictions to new housing supply lowered aggregate US growth by more than 50% from 1964 to 2009.

See The new Hsieh and Moretti paper on land use restrictions and economic growth. From Marginal Revolution.
"We quantify the amount of spatial misallocation of labor across US cities and its aggregate costs. Misallocation arises because high productivity cities like New York and the San Francisco Bay Area have adopted stringent restrictions to new housing supply, effectively limiting the number of workers who have access to such high productivity. Using a spatial equilibrium model and data from 220 metropolitan areas we find that these constraints lowered aggregate US growth by more than 50% from 1964 to 2009.
Here is the pdf, via the excellent LondonYIMBY.  Here is a related estimate from two days ago."

Sunday, May 21, 2017

The Value of Access: How Closeness to the Obama White House Benefited Companies

By Guy Rolnik of the Stigler Center.

"Between January 2009 and December 2015, White House officials met with corporate CEOs 2,286 times. A new study, to be presented at the upcoming Stigler Center conference on the political economy of finance, shows that access to the White House has several economic benefits.

Since Donald Trump assumed office, there has been a dramatic increase in the reporting and discussion on crony capitalism in the U.S.: Trump’s conflicts of interest, his pro-business policies, his frequent meetings with business executives, and the assertion that, more than any other president, he is tuned in to the interests of big business.

But are all these issues unique to Trump? Was the Obama administration completely different?
A new paper
 by Jeffrey Brown and Jiekun Huang of the University of Illinois at Urbana-Champaign, “All the President’s Friends: Political Access and Firm Value,” is trying to answer that question using White House visitor logs from January 2009 through December 2015, when President Barack Obama was in office. The paper will be presented during the Stigler Center conference on the political economy of finance, which will be held on June 1–2.

Before asking ourselves if the methodology and interpretation are convincing, it is worth starting with two anecdotes, both mentioned in the paper.

The first is about Google. The backdrop is an antitrust investigation against Google by the FTC that culminated in an August 2012 FTC document that recommended suing Google for certain practices. Immediately after that, Google executives had a series of meetings with FTC and White House officials: for example, Google CEO Larry Page (currently the CEO of Alphabet, Google’s parent company) met with FTC officials, and the company’s executive chairman Eric Schmidt met with Pete Rouse, a senior adviser to President Obama, in the White House. Following those meetings, the FTC closed its investigation after Google agreed to make some changes to its business practices.

The FTC decision to close the investigation was probably not a direct result of these meetings. After analyzing White House visitor logs, the Wall Street Journal found that Google possibly had unprecedented access to the White House: since Obama took office, Google employees have visited the White House for meetings with senior officials about 230 times—on average, roughly once a week.

The second anecdote is related to General Electric (GE). GE and the Obama White House seemed to be very close. In a July 1, 2010, piece in the Washington Examiner, Timothy Carney wrote:

“Except for maybe Google, no company has been closer and more in synch with the Obama administration than General Electric. First, there’s the policy overlap: Obama wants cap-and-trade, GE wants cap-and-trade. Obama subsidizes embryonic stem-cell research, GE launches an embryonic stem-cell business. Obama calls for rail subsidies, GE hires Linda Daschle as a rail lobbyist. Obama gives a speech, GE employee Chris Matthews feels a thrill up his leg. I could go on.”
Behind this, wrote Carney, is the close relationship between GE CEO Jeff Immelt and President Obama: Immelt sat on Obama’s Economic Recovery Advisory Board and was asked by Obama’s Export-Import Bank to be the opening act for the president at an Ex-Im conference. And this may be just the tip of the iceberg.

The most frequent visitors

The reason that the authors chose to focus on the Obama administration is that it was simply the only administration to voluntarily release its visitor logs. Previous administrations didn’t do so, and the Trump administration is unlikely to do so either.

Using the visitor logs, Brown and Huang were able to identify 2,286 meetings between corporate executives from members of the S&P 1500 stock index and White House officials during the seven-year period between January 2009 through December 2015. 

As can be clearly seen from panel A of Table 1 above, which includes all the executives that had at least 10 meetings in the period studied, the three most frequent visitors were Honeywell’s David Cote (30 visits), GE’s Immelt (22 visits) and Evercore’s Roger Altman (21 visits). On average, Cote had meetings in the Obama White House once every 2.8 months.

Panel B, which includes the list of White House officials who had the most meetings with business executives, reveals that the most visited officials were Valerie Jarrett (Senior Advisor and Assistant to the President for Intergovernmental Affairs and Public Engagement), Jeff Zients (Assistant to the President for Economic Policy and Director of the National Economic Council) and President Obama himself. On average, Jarrett met with corporate executives once every 24 days.

In assessing the degree of access that S&P 1500 executives had to the White House during the period studied, Brown and Huang found that firm-years in which the executives visit the White House account for around 11.4 percent of the sample, suggesting that a non-trivial fraction of the firms have political access. Also, since firms with political access are typically larger, they account for about 40 percent of the total market capitalization of firms in the sample.

Campaign contributions and lobbying ‘buy’ access

As access to the White House is still a scarce resource, what are the factors related to better access to it?

Brown and Huang regressed access against a series of firm characteristics and found that, as defined by the model they used, an increase in campaign contributions increases the probability of gaining access to the White House by 2.4 percentage points. Since the unconditional probability (firm-years in which the executives visit the White House) is 11.4 percent, 2.4 percentage points is a significant increase. This, the authors write, is “consistent with the notion that campaign contributions ‘buy’ political access.”

Their model also indicates that firms that spend more on lobbying, receive more government contracts, and have a large market share are also associated with an increased probability of gaining access to the White House.

Economic gains from access

Access to the White House, Brown and Huang found, has several economic benefits.

One is a positive effect on government contracts: the average firm generates about $34 million in profits from incremental contract volume due to political access.

Another is regulatory relief. Using a dataset of news articles which were characterized as positive or negative (based on the relative fraction of positive and negative words in the articles), Brown and Huang matched the articles with White House visits and found that “treatment firms, relative to control firms, experience an increase of 0.036 in the number of positive regulatory news articles during the 12 months immediately following a White House visit relative to that during the 12 months immediately before the visit.”

These results, they write, are “in line with the hypothesis that political access enables firms to obtain regulatory relief.”

These and other access benefits are encapsulated in stock prices. Brown and Huang checked excess stock returns—specifically, cumulative abnormal returns (CARs) around corporate executives’ visits to the White House.

“What we find is that these meetings tend to be associated with significant increase in firms’ stock prices,” says Huang. “We also find that these companies are able to secure more favorable regulatory access after the meetings.”

Cumulative abnormal returns in the days around corporate executives’ White House visits

The results show positive and statistically significant CARs for four timeframes checked by the authors. The results also indicate that the highest CARs, 2.749 percent, occurred over a 70-day window around a meeting with Obama’s top aides, slightly higher than following a meeting with Obama himself. Since the average market capitalization of the firms included in the study’s sample is about $36 billion, a 2.749 percent CAR is, on average, equivalent to an almost $1 billion increase in market capitalization.
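For readers unfamiliar with the event-study method, a cumulative abnormal return is simply the sum, over an event window, of a stock's daily returns in excess of a benchmark. A minimal sketch; the daily return series below are invented for illustration:

```python
def cumulative_abnormal_return(stock_returns, benchmark_returns):
    """CAR: sum of daily returns in excess of a benchmark over an event window."""
    return sum(s - b for s, b in zip(stock_returns, benchmark_returns))

# Hypothetical daily returns around a White House visit (illustrative only).
stock     = [0.004, 0.010, -0.002, 0.008, 0.006]
benchmark = [0.001, 0.002,  0.001, 0.002, 0.001]
car = cumulative_abnormal_return(stock, benchmark)
print(f"CAR over the window: {car:.2%}")
```

Brown and Huang compute sums of this kind over windows around executives' visits; figures like the 2.749 percent above are such sums, averaged across firms.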

Cumulative abnormal returns in the days around corporate executives’ White House visits by year from 2009 through 2015. Source: Brown and Huang (2017).

The excess returns are related to election cycles: The CARs were significantly positive during the 2012 general election year, the first years after a general election (2009 and 2013), as well as 2014 (Figure 2). These results indicate that access to influential government officials is particularly beneficial during those years.

Indeed, confounding factors may be present, but Brown and Huang largely addressed them. For example, they excluded White House visits associated with the president’s advisory board meetings, as well as follow-up visits.

To address possible concerns about omitted variables that drive both the timing of corporate executives’ meetings with federal officials and stock returns, Brown and Huang used the election of Donald Trump as a shock, as up until the very last moment, his Democratic opponent, Hillary Clinton, was widely expected to win.

Therefore, if the basic notion of their theory is correct, we would expect shares of the companies that had access to the Obama White House to underperform as soon as the election results were announced. And, indeed, that is what they found: in all four models that they constructed—in which they checked the cumulative market-adjusted abnormal returns of these stocks from November 9, 2016, to November 11, 2016—the stocks of the companies that had access to Obama’s White House showed statistically significant underperformance in a range that runs between 80 basis points and 130 basis points.

A similar test of Republican administrations, says Huang, will likely yield similar results. “Before Donald Trump took office, he had a meeting with the CEO and founder of Alibaba, Jack Ma. Upon the release of the news that Trump was meeting with Jack Ma, Alibaba’s stock price increased by 1.5 percent, which is very much consistent with what we observe.”

The Trump administration could be worse in this respect than that of Obama. One reason is his policies so far. Another is that, just recently, the White House announced that it would no longer make its visitor logs available to the public. The White House is not obligated to make the logs public, but the move broke with Obama’s practice. The White House cited privacy and national security concerns. In fact, those issues were already addressed by the Obama White House by redacting visits that were tied to national security issues, other particularly sensitive issues, and private visits that were not related to the business of governing."

Government Can’t Even Plan for Its Own Survival

By David Boaz of Cato.
"Economists and (classical) liberals have long criticized the failures of government planning, from Hayek and Mises and John Jewkes to even Robert Heilbroner. Ron Bailey wrote about centralized scientific planning, Randal O’Toole about urban planning, Jim Dorn about the 1980s enthusiasm for industrial planning, and I noted the absurdities of green energy planning.

One concern about planning is that it will lead government to engage in favoritism and cronyism. So who would have guessed that when the leaders of the federal government set out to plan for their own survival—if no one else’s—in the event of nuclear attack, they failed?

That’s the story journalist and author Garrett Graff tells in his new book Raven Rock: The Story of the U.S. Government’s Secret Plan to Save Itself—While the Rest of Us Die. As the Wall Street Journal summarizes:
COG—continuity of government—is the acronymic idée fixe that has underpinned these doomsday preparations. A bunker was installed in the White House after Pearl Harbor, but the nuclear age (particularly after the Soviet Union successfully tested an atomic bomb in September 1949) introduced a nationwide system of protected hideaways, communications systems, evacuation procedures and much else of a sophistication and ingenuity—and expense—never before conceived….
Strategies for evacuating government VIPs began in earnest in the early 1950s with the construction of Raven Rock, an “alternate Pentagon” in Pennsylvania near what would become known as Camp David, and Mount Weather, a nuclear-war sanctuary in Virginia for civilian officials….
In 1959, construction began on a secret refuge for Congress underneath the Greenbrier, a resort in West Virginia. In the event of an attack, members of Congress would have been delivered by special train and housed in dormitories with nameplated bunk beds.
The most important COG-related activities during the Kennedy administration came during the Cuban Missile Crisis in October 1962, the closest this country has come to a nuclear war. Not only was the military mobilization chaotic—“one pilot bought fuel for his bomber with his personal credit card”—but VIP evacuation measures were, for the most part, a debacle: “In many cases, the plans for what would happen after [a nuclear attack on the U.S.] were so secret and so closely held that they were almost useless.” …
The Air Force also acquired, for the president’s use, four Boeing 747 “Doomsday planes” with state-of-the-art communications technology, which were nicknamed “Air Force One When It Counts.”…
Probably the most fraught 24 hours in the history of COG worrying occurred on Sept. 11, 2001, when al Qaeda terrorists attacked the World Trade Center and the Pentagon. COG projects and training had been ceaselessly initiated and honed for a half-century; but, as Mr. Graff writes with impressive understatement, “the U.S. government [wasn’t] prepared very well at all.”…
While Vice President Dick Cheney had been swiftly hustled to the White House bunker, “those officials outside the bunker, even high-ranking ones, had little sense of where to go, whom to call, or how to connect back to the government,” Mr. Graff writes. But there were enough people in the bunker to deplete the oxygen supply and raise the carbon-dioxide level, and so “nonessential staff” were ordered to leave. When House Speaker Dennis Hastert tried to call Mr. Cheney on a secure phone, he couldn’t get through….
When President George W. Bush heard the news about the attacks that morning, he was in Florida. He was whisked into Air Force One, which, Mr. Graff notes, “took off at 9:54 a.m., with no specific destination in mind.” It would eventually land, and the president would address the country. But “Air Force One’s limitations”—it wasn’t one of the Doomsday planes—“came into stark relief.” For one thing the plane’s communications systems were woefully inadequate for what was required on 9/11. “On the worst day in modern U.S. history,” Mr. Graff writes near the end of his exhaustingly detailed account (I sometimes felt buried alive under its mass of data), “the president of the United States was, unbelievably, often less informed than a normal civilian sitting at home watching cable news.”
Fifty years of planning for a single event, the most important task imaginable—the survival of the republic and their own personal survival—and top government officials still didn’t get it right. A good lesson to keep in mind when we contemplate having less-motivated government officials plan our cities, our energy production, our health care system, or our entire economy."

Saturday, May 20, 2017

Stolper-Samuelson predicts that the wages in America that will disproportionately rise when trade becomes freer are chiefly those earned by middle-income workers

Here I Take a Minority Position on the Prediction of Stolper-Samuelson by Don Boudreaux. See also Clarification and Elaboration on Stolper-Samuelson.

Here’s a letter to the Wall Street Journal:
Reviewing Roger Backhouse’s biography of the economist Paul Samuelson, Eric Maskin writes that “The Stolper-Samuelson Theorem implies that international trade causes inequality between high-skilled and less-skilled workers to grow in rich countries.  The theorem was derived in 1941 but clearly remains relevant in today’s America of rising inequality” (“An Einstein of the Dismal Science,” May 20).
Not so fast.
When applied to labor, the Stolper-Samuelson Theorem predicts that the workers whose wages fall as a result of freer trade are (in econ jargon) the relatively more scarce factor of production – which, in America, is less-skilled workers – and that the workers whose wages rise are the relatively more abundant factor of production.  In plain language, while the workers in America whose wages are reduced by freer trade are indeed the lowest paid, they also are a minority of workers.  Freer trade raises the wages of those workers whose skill-levels are relatively most abundant.  Because the workers in America whose skill-levels are most abundant likely are those whose incomes are in or near the middle of the income distribution for workers, Stolper-Samuelson predicts that the wages in America that will disproportionately rise when trade becomes freer are chiefly those earned by middle-income workers.
Yet it is difficult to see how a change in the wages distribution with a disproportionate amount of the gains going to middle-income workers increases income inequality.
 Therefore, to the extent that the Stolper-Samuelson Theorem applies in reality, it tells us that whatever increase in income inequality has occurred over the past several decades is likely not due to the effect that freer trade has on the distribution of workers’ wages.
Donald J. Boudreaux
Professor of Economics
Martha and Nelson Getchell Chair for the Study of Free Market Capitalism at the Mercatus Center
George Mason University
Fairfax, VA  22030
Note that in this letter I do not argue that income inequality has not increased in the United States.  Instead, I argue that whatever increase in income inequality there might have been in the U.S. is not as straightforwardly explained by – or even consistent with – the Stolper-Samuelson Theorem as many people today (such as Eric Maskin) presume."

Causation clearly runs from tight money to falling NGDP to financial distress

See Financial crisis or monetary policy failure? by Scott Sumner.

"I often debate the question of whether severe slumps are caused by financial crisis or tight money. In my view it's usually tight money, with financial stress being a symptom of falling NGDP. So how would we test my hypothesis?

While cleaning out my office at Bentley, I came across an old NYT article from June 11, 1933:
Wall Street notes a remarkable contrast between the attitude toward the war debt question last December and that of the present time. Last year, financial circles began to become apprehensive about the war debt question long before December 15. By late November the pound sterling had fallen to a record low of $3.14 1/2 and the financial markets were severely depressed. At the present time, although the war debts payments are due by next Thursday, there has been almost no discussion of the subject in financial circles, and the possibilities of wholesale default have left the markets unperturbed.
Why did the markets suddenly stop caring about the war debts issue in June 1933? For the same reason they suddenly started caring about the war debts issue in mid-1931. War debts disturbed the financial markets when they led to devaluation fears, which triggered massive gold hoarding. By June 1933, the US was off the gold standard, and hence gold hoarding no longer exerted a deflationary impact on the US. However, gold hoarding continued to be a problem for countries still on the gold standard, such as France. 
In my book entitled "The Midas Paradox", I did a very extensive empirical study of this question. The price of German war debt bonds suddenly became highly correlated with US stock indices in mid-1931 (when Germany got into financial trouble), and this continued through 1932. Fears of German default were triggering a loss of confidence in the international gold standard. That loss of confidence was justified, as Germany adopted exchange controls in July 1931 and the UK devalued in September 1931. At that point people started worrying about a US devaluation, and gold hoarding rose sharply.

Because the supply of newly mined gold doesn't change very much from year to year, big changes in the value of gold are primarily caused by shifts in gold demand. But once the US began devaluing the dollar in April 1933, increases in gold demand no longer had a significant deflationary impact on the US. Gold kept getting more valuable, but now the dollar was losing value. (Recall that price deflation means that money is getting more valuable.)

Back in 1932, the vast majority of serious people rejected my "tight money" explanation of the Depression. It was "obviously" caused by financial turmoil, both domestic and international. Falling NGDP was seen as a symptom. Only a few lonely exceptions like Irving Fisher and George Warren took a "market monetarist" perspective, urging a shift toward expansionary monetary policy. Because we were near the zero bound, they recommended a depreciation of the dollar against gold. In 1933, FDR adopted their suggestion, and it worked just as Warren and Fisher predicted---prices and output immediately began rising sharply. The policy would have been even more effective if not offset by the NIRA, which sharply reduced aggregate supply.

And there is lots more evidence for the tight money ---> falling NGDP ---> financial distress chain of causation. After the dollar started depreciating against gold in April 1933, domestic bank failures ceased almost immediately.

Some people claim that tight money did not cause the Great Recession, because there was no alternative monetary policy at the zero bound of interest rates. But something similar occurred in the 1980s, when we were not at the zero bound. Between 1934 and 1980, there was a period of calm in the banking system. Some people wrongly attribute that to regulation, but in fact it was caused by higher rates of inflation and NGDP growth during 1934-80, which made it easier for debts to be repaid. As soon as the Fed adopted a tight money policy in 1981, and NGDP growth began slowing sharply, we experienced a bout of bank failures (mostly S&Ls). The causation in this case clearly went from tight money to sharply slower NGDP growth to banking distress, as we were not even close to the zero lower bound on interest rates.

To summarize, the question of whether tight money or financial distress causes deep slumps might seem almost unsolvable, if you simply focus on the Great Recession. But those with a deep knowledge of economic history know that causation clearly runs from tight money to falling NGDP to financial distress. Unfortunately, economic history is no longer widely taught in our graduate programs, so we now have an entire generation of economists who are ignorant of this subject, and who keep developing business cycle models that are easily refuted by the historical record."
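The correlation shift Sumner describes (German war-debt bond prices suddenly tracking US stock indices in mid-1931) is the kind of pattern a rolling correlation makes visible. Here is a minimal sketch using synthetic data, not Sumner's actual series; the "crisis" date and both series are made up purely to illustrate the method:

```python
import numpy as np

def rolling_corr(x, y, window):
    """Rolling Pearson correlation between two equal-length series.
    Entries before the first full window are NaN."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    out = np.full(len(x), np.nan)
    for i in range(window - 1, len(x)):
        xs = x[i - window + 1:i + 1]
        ys = y[i - window + 1:i + 1]
        out[i] = np.corrcoef(xs, ys)[0, 1]
    return out

# Illustrative only: two series that are unrelated at first,
# then driven by a common shock after a "crisis" at t = 50.
rng = np.random.default_rng(0)
shock = rng.normal(size=100)
bonds = np.where(np.arange(100) < 50,
                 rng.normal(size=100),
                 shock + 0.1 * rng.normal(size=100))
stocks = np.where(np.arange(100) < 50, rng.normal(size=100), shock)

corr = rolling_corr(bonds, stocks, window=20)
# The rolling correlation hovers near zero early on and rises
# sharply once both series are driven by the common shock.
```

On real data, a jump like this in mid-1931 is exactly what the "sudden correlation" claim amounts to.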

Friday, May 19, 2017

Subsidizing sports teams is a bad play

See What Prince William County can learn from the Oakland Raiders by Tyler Muench in The Washington Post. Tyler Muench is Northern Virginia director with Americans for Prosperity. Excerpts:
"Professional sports teams have been relocating to new cities when they fail to acquire public funding for stadiums. Last year, the Rams stuck St. Louis with a $144 million bill after the team decided to move to Los Angeles. And earlier this year, San Diego taxpayers were left with a $50 million tab after the Chargers joined the Rams in L.A.

This time around is no different. The Oakland Raiders’ move to Las Vegas will leave Oakland taxpayers stuck with a $163 million bill. Teams constantly ask taxpayers for handouts despite generating vast revenues. Billionaire owners get publicly financed stadiums and the working-class citizens pick up the tab — corporate welfare at its worst.

It’s understandable that cities want to attract professional sports teams. People of all ages love sports, and teams often define a community’s identity. But local governments can’t abandon all logic and principle to secure a team. That’s what Oakland did, and it didn’t work out.

The San Francisco Chronicle reports that the original $200 million bond that brought the Raiders to Oakland will cost $350 million. When asked about the bond, Oakland City Council President Larry Reid acknowledged it was a bad deal. “The projections were off, but everyone was just caught up in the emotions of having the Raiders return.”"

"Proponents of taxpayer-funded stadiums insist that stadiums are economic engines for communities and a wise investment for taxpayers. But economists from around the country disagree. A 2015 study from the Mercatus Center at George Mason University found these projects provide little to no economic benefit for their communities.

Economist Victor Matheson of Holy Cross was more to the point: “Whatever number the sports promoter says, take it and move the decimal one place to the left. Divide it by 10, and that’s a pretty good estimate of the actual economic impact.”"

Here are the key findings from the Mercatus study:

  • Professional sports can have some impact on the economy. Looking at all the sports variables, including presence of franchises, arrival and departure of clubs in a metropolitan area, and stadium and arena construction, the study finds that the presence of a franchise is a statistically significant factor in explaining personal income per capita, wage and salary disbursements, and wages per job.
  • But this impact tends to be negative. Individual coefficients, such as stadium or arena construction, sometimes have no impact, but frequently indicate harmful effects of sports on per capita income, wage and salary disbursements, and wages per job. When the effect of these coefficients appears to be positive, it is generally so small as to be insignificant.
  • At most, sports account for less than 5 percent of the local economy. Though sports are often perceived as a major economic force, sports at most account for less than 5 percent of the local economy, with the majority of estimates putting that number under 1.5 percent. Simply stated, sports teams are not the star players in local economies.
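Matheson's rule of thumb quoted above is simple enough to state in code. A toy illustration; the $500 million promoter claim is a made-up number, not from the article:

```python
def matheson_estimate(promoter_claim_dollars: float) -> float:
    """Victor Matheson's rule of thumb: a realistic estimate of a
    stadium's economic impact is the promoter's claim divided by 10
    (i.e., move the decimal one place to the left)."""
    return promoter_claim_dollars / 10

# Hypothetical promoter claim of a $500 million annual impact:
print(matheson_estimate(500_000_000))  # 50000000.0
```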

How Banks Game The Community Reinvestment Act By Putting Branches In Neighborhoods That Only Look Low Income

See Never Mind the Ferrari Showroom, Bank Regulators Call This a Poor Neighborhood: Branches in business districts are given low-income designation by a quirk in federal law by Rachel Louise Ensign and AnnaMaria Andriotis of the WSJ. Banks must pass a test showing they are in compliance, or they may not be allowed to engage in mergers. Excerpts:
"To any casual observer, the area just south of Trump Tower in Midtown Manhattan is obviously wealthy: The blocks are crowded with skyscrapers, and stores include Versace and Ferrari. Diners can pick at the foie gras and caviar on La Grenouille’s $172 prix-fixe dinner menu.

In the eyes of federal-bank regulations, though, that sliver of New York City is a poor neighborhood where median incomes are relatively low.

The anomaly has yielded a hidden benefit for banks such as J.P. Morgan Chase & Co. and Wells Fargo & Co. that have crowded branches into the area. Having robust branch representation in supposedly low-income areas gives them a better score on a key regulatory test that can help determine how fast they expand.

"Neighborhoods like the one in Midtown Manhattan could add a new dimension to the debate, even though branch analysis is only one part of regulators’ broader CRA evaluations. Its quirky treatment under the CRA is due to the fact that regulators who enforce the act rely on older, sometimes unreliable, Census Bureau data to determine an area’s income level.

New York isn’t an isolated example. Six of the 10 most popular poor areas for banks to have branches, including the Manhattan tract, are slated to lose that classification when more recent census data go into effect this year, according to regulators and data from fair-lending software company ComplianceTech. But bank regulators have been using the older data because they stick to a preset schedule of switching every five years.

In one of these census tracts, a “low income” area in downtown San Francisco, one of the most expensive cities in the country, 53 branches pack into an area that census data indicate has only 1,783 residents. That’s 52 more branches than the average poor district in the U.S. has, despite the fact the San Francisco tract has far fewer residents than average.

About 30 miles away in Menlo Park, Calif., a First Republic Bank branch on Facebook Inc.’s corporate campus is classified as lower income because the surrounding areas have lower incomes than the median of the broader area. But the only people with access to the branch are employees and guests of Facebook."

"Then there is Manhattan’s census tract 102, the bustling Midtown blocks with the most lower-income bank branches per capita in the U.S., according to a Wall Street Journal analysis of data from ComplianceTech’s Due to a paucity of residential buildings in the area, the district bordered by Park and Fifth avenues, 49th and 56th streets has only 230 residents, or about 10 for each of the 22 bank branches that call the tract home, according to the census data used by banking regulators."

"But the areas often have few residential buildings, one reason that can explain the lower-income CRA designation. The fact that banks get credit for branches in these areas, though, has prompted some criticism.

“Banks can conform to the letter of the law, but not meet the purpose of CRA,” says John Vogel, an adjunct professor at Dartmouth’s Tuck School of Business. This is especially the case, he says, in the classification of “low- and moderate-income neighborhoods.”"
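The per-capita figures in the excerpts are simple ratios, and it is worth checking that the article's numbers hang together. A sketch using the figures quoted above (the variable names are just labels for the two tracts):

```python
def branches_per_resident(branches: int, residents: int) -> float:
    """Bank branches per resident in a census tract."""
    return branches / residents

# Figures quoted in the WSJ article:
manhattan_102 = branches_per_resident(22, 230)    # ~0.096, or ~1 branch per 10 residents
san_francisco = branches_per_resident(53, 1783)   # ~0.030

# Residents per branch in Manhattan tract 102:
print(round(1 / manhattan_102, 1))  # 10.5
```

The result matches the article's "about 10 residents for each of the 22 bank branches."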

Thursday, May 11, 2017

Long-term fate of tropical forests may not be so dire

By Lisa Marshall. From CU Boulder Today.
"Tropical rainforests are often described as the “lungs of the earth,” able to essentially inhale carbon dioxide from the atmosphere and exhale oxygen in return. The faster they grow, the more they mitigate climate change by absorbing CO2.

This role has made them a hot research topic, as scientists question what will happen to this vital carbon sink long-term as temperatures rise and rainfall increases.

Conventional wisdom has held that forest growth will dramatically slow with high levels of rainfall. But CU Boulder researchers this month turned that assumption on its head with an unprecedented review of data from 150 forests that concluded just the opposite.

“Our data suggest that as large-scale climate patterns shift in the tropics, and some places get wetter and warmer, forests will accelerate their growth, which is good for taking carbon out of the atmosphere,” said Philip Taylor, a research associate with the Institute of Arctic and Alpine Research (INSTAAR). “In some ways, this is a good-news story, because we can expect greater CO2 uptake in tropical regions where rainfall is expected to increase. But there are a lot of caveats.”

Ecologists have long thought that forest growth follows a hump-shaped curve when it comes to precipitation: To a point, more rainfall leads to more growth. But after about 8 feet per year, it was assumed too much water can waterlog the ecosystem and slow the growth rate of forests. While working in the Osa Peninsula of Costa Rica, Taylor, who got his doctoral degree in ecology and evolutionary biology at CU Boulder, began to question this assumption.

“Here we were in a place that got 16 feet of rain per year, and it was one of the most productive and carbon-rich forests on Earth. It clearly broke from the traditional line of thinking,” he said.

Intrigued, Taylor spent four years synthesizing data on temperature, rainfall, tree growth and soil composition from rainforests in 42 countries, compiling what he believes is the largest pan-tropical database to date.

The study, published recently in the journal Ecology Letters, found that cooler forests (below 68 degrees F on average), which make up only about 5 percent of the tropical forest biome, seemed to follow the expected hump-shaped curve. But warmer forests, which are in the majority, did not.

“The old model was formed with a lack of data from warm tropical forests,” said Taylor, who describes such remote, often uninhabited forests as among the “final frontiers” of scientific exploration. “It turns out that in the big tropical forests that do the vast majority of the ‘breathing’ the situation is flipped. Instead of water slowing growth down, it accelerates it.”

Taylor cautioned this does not mean climate change won’t negatively impact tropical forests at all. In the short term, research has shown, droughts in the Amazon Basin have already led to widespread plant death and a 30 percent decrease in carbon accumulation in the past decade.

“A lot of climate change is happening at a pace far quicker than what our study speaks to,” he says. “Our study speaks to what we can expect forests to do over hundreds of years.”

Because the carbon cycle is complex, with forests also releasing carbon into the atmosphere as plants die, it’s still impossible to say what the net impact of a wetter climate might mean for the forest’s ability to sequester carbon, said senior author Alan Townsend, a professor of environmental studies.

“The implications of the change still need to be worked out, but what we can say is that the forest responds to changes in rainfall quite differently than what has been a common assumption for a long time,” said Townsend.

Going forward, the authors hope the findings will set the record straight for educators and scientists.

“Our findings fundamentally change a view of the tropical forest carbon cycle that has been published in textbooks and incorporated into models of future climate change for years,” said Taylor. “Given how much these forests matter to the climate, these new relationships need to be a part of future climate assessments.”"