Gold shaped our country’s monetary policy—and Americans’ fantasies of wealth—for nearly four centuries. James Grant reviews One Nation Under Gold by James Ledbetter.
It’s no work at all to make modern money. Since the start of the 2008 financial crisis, the world’s central bankers have materialized the equivalent of $12.25 trillion. Just tap, tap, tap on a computer keypad.
One Nation Under Gold is a brief against the kind of money you have to dig out of the ground. And you do have to dig. The value of all the gold that’s ever been mined (and which mostly still exists in the form of baubles, coins and ingots), according to the World Gold Council, is a mere $7.4 trillion.
Gold anchored the various metallic monetary systems that existed from the 18th century to 1971. They were imperfect, all right, just as James Ledbetter bends over backward to demonstrate. The question is whether the gold standard was any more imperfect than the system in place today.
Republican William McKinley, who campaigned for ‘sound money,’ signed the gold standard into law in 1900.
That system features monetary oversight by former university economics faculty—the Ph.D. standard, let’s call it. The ex-professors buy bonds with money they whistle into existence (“quantitative easing”), tinker with interest rates, and give speeches about their intentions to buy bonds and tinker with interest rates (“forward guidance”).
You wonder how the Ph.D. standard came to eclipse a system whose very name, “gold standard,” is a byword for excellence. Addressing a national television audience on Sunday evening, Aug. 15, 1971, President Richard Nixon announced the temporary suspension of the dollar’s convertibility into gold. No more would foreign governments enjoy the right to trade in their greenbacks for bullion at the then standard rate of $35 to the ounce. (Americans had long since relinquished that right; indeed, as Nixon spoke, they could not legally own gold.) Roughly a half-century later, the temporary suspension is beginning to look permanent.
Up until the Nixon edict, paper money, under the law, was a kind of derivative. It derived its value from the metal into which it was convertible. Today’s dollar is inconvertible. To be sure, you can exchange Federal Reserve notes for gold coins or bitcoins to your heart’s desire, but the rate of exchange is whatever the market will bear. Under a gold standard, fixedness was the great monetary virtue. Nowadays, adaptability is the beau ideal. As George Gilder observes, money has been transformed from a measuring rod into a magic wand. Anyway, the Hamiltons or Lincolns or Grants in your wallet owe their value to the government’s fiat, not to its gold.
Mr. Ledbetter’s book is a chronicle of the American people’s fascination with gold. He is mystified and bemused by it. He rolls his eyes at the gold rushes and the gold-centered orthodoxies of yesteryear. Whatever were our forebears thinking?
His well-spun narrative spans the better part of four centuries. He takes us from gold mining in North Carolina during the administration of John Adams to the Founders’ monetary protocols, which defined the dollar as a weight of gold or silver; from the California Gold Rush to the late-19th-century politics of inflation, featuring William Jennings Bryan and his unsuccessful campaign to inflate the gold dollar by substituting abundant silver; from the formation of the Federal Reserve in 1913—the dollar was still as good as gold—to the shockingly improvisational dollar policies of the New Deal. One fine day, Mr. Ledbetter relates, FDR raised the gold price by 21 cents because it seemed to the president that three times seven was a lucky number.
Next comes the patchwork gold regime of the 1950s and 1960s, the system known by the place at which it was conceived, Bretton Woods (N.H.). No more was gold the gyroscope, or flywheel, of the international monetary system, as Lewis E. Lehrman has written. Now the metal sat inert in vaults. Central banks might demand the right to convert their dollars into gold, and vice versa, but few exercised the option.
Mr. Ledbetter breaks some historical news by uncovering the existence of Operation Goldfinger, a secret government project in the time of Lyndon Johnson to extract gold from “seawater, meteorites, even plants.” By the late 1960s, America’s foreign liabilities were growing much faster than the gold available to satisfy them. For better or worse, the run on finite American gold continued, and Nixon cut the cord.
On, now, to the great inflation of the 1970s, along with the rise of the goldbugs, the cranks (Mr. Ledbetter’s interpretation) or visionaries (as others might style them) who predicted the collapse of the dollar and the rise of double-digit inflation in the Jimmy Carter years. In the mid-1970s, as Mr. Ledbetter recounts, the long fight to restore the right of American citizens to own gold—a right that FDR’s administration had extinguished in 1933—was finally won. The author concludes his story with a survey of the contemporary rear-guard movement to expose the failings of today’s monetary nostrums and reinstitute a gold dollar.
As if to clinch the case against gold—and, necessarily, the case for the modern-day status quo—Mr. Ledbetter writes: “Of forty economists teaching at America’s most prestigious universities—including many who’ve advised or worked in Republican administrations—exactly zero responded favorably to a gold-standard question asked in 2012.” Perhaps so, but “zero” or thereabouts likewise describes the number of established economists who in 2005, ’06 and ’07 anticipated the coming of the biggest financial event of their professional lives. The economists mean no harm. But if, in unison, they arrive at the conclusion that tomorrow is Monday, a prudent person would check the calendar.
Mr. Ledbetter makes a great deal of today’s gold-standard advocates, more, I think, than those lonely idealists would claim for themselves (or ourselves, as I am one of them). The price of gold peaked as long ago as 2011 (at $1,900, versus $1,250 today), while so-called crypto-currencies like bitcoin have emerged as the favorite alternative to government-issued money. It’s not so obvious that, as Mr. Ledbetter puts it, “we cannot get enough of the metal.” On the contrary, to judge by ultra-low interest rates and sky-high stock prices, we cannot—for now—get enough of our celebrity central bankers.
What was the gold standard, exactly—this thing that the professors dismiss so airily today? A self-respecting member of the community of gold-standard nations defined its money as a weight of bullion. It allowed gold to enter and leave the country freely. It exchanged bank notes for gold, and vice versa, at a fixed and inviolable rate. The people, not the authorities, decided which form of money was best.
The gold standard was a hard taskmaster, all right. You couldn’t devalue your way out of trouble. You couldn’t run up a big domestic budget deficit. The central bank of a gold-standard country (if there was a central bank) was charged with preserving the convertibility of the currency and, in a pinch, serving as lender of last resort to needy commercial banks. Growth, employment and price stability took their own course. And if, in a financial panic or a business-cycle downturn, gold fled the country, it was the duty of the central bank to establish a rate of interest that called the metal home. In the throes of a crisis, interest rates would likely go up, not down.
The modern sensibility quakes at the rigor of such a system. Our forebears embraced it. Countries observed the gold standard because it was progressive, effective, civilized. It anchored prices over the long term (with many a bump in the short term). It promoted balance in international accounts and discipline in domestic ones. Great thinkers—Adam Smith, David Ricardo and, yes, John Maynard Keynes himself in the wake of World War I—extolled it.
The chronic problem in gold-standard days was the one that continues to bedevil us moderns: how to maintain a stable currency when lenders and borrowers run amok. President James Buchanan, Lincoln’s immediate predecessor, addressed the question in his first State of the Union address in the wake of the Panic of 1857. The story of American finance, he contended, was the story of paper credit subverting sound money: “At successive intervals the best and most enterprising men have been tempted to their ruin by excessive bank loans of mere paper credit.” A not-so-distinguished president, Buchanan made the monetary point that Mr. Ledbetter skirts: Excessive lending and borrowing subverts the stability of money. It’s the cause of panics under monetary systems both metallic and paper. Which is to say that we earthlings will never achieve financial perfection. It seems that the trouble (or, at least, one trouble) with money is credit and that the trouble with credit is people.
The gold standard, perhaps above all, was a political institution. It flourished in the age of classical liberalism. It was the financial counterpart to the philosophy of limited government. The Ph.D. standard is likewise a political institution. It is the financial counterpart to the philosophy of statism. The policy that some banks are too big to fail—that they must be treated almost as wards of the state to prevent their failure—is a hallmark of the modern age. The policy—indeed, the law—that the stockholders of a bank are themselves responsible for the solvency of the institution in which they hold a fractional interest was a hallmark of the gold-standard era.
Mr. Ledbetter is on a mission to set the historical record straight and head off an unprogressive movement away from paper money. He writes: “To avoid gold’s false paths, we need to argue with the past, to test the assumptions that are too often and too casually passed uncritically.”
I expect that before very long we will be arguing with our immediate past—demanding to know why the public debt has doubled since 2007, second-guessing our collective belief in the mazy doctrines of “quantitative easing” and “forward guidance,” and tuning in to watch congressional hearings into the causes of some future stock-market crash. Mr. Ledbetter has told some good stories. He hasn’t made his case.
—Mr. Grant is the editor of Grant’s Interest Rate Observer.
“History repeats – the argument for abandoning prevailing valuation methods regularly emerges late in a bull market, and typically survives until about the second down-leg (or sufficiently hard first leg) of a bear. Such arguments have included the ‘investment company’ and ‘stock scarcity’ arguments in the late 20’s, the ‘technology’ and ‘conglomerate’ arguments in the late 60’s, the nifty-fifty ‘good stocks always go up’ argument in the early 70’s, the ‘globalization’ and ‘leveraged buyout’ arguments in 1987 (and curiously, again today), and the ‘tech revolution’ and ‘knowledge-based economy’ arguments in the late 1990’s. Speculative investors regularly create ‘new era’ arguments and valuation metrics to justify their speculation.”
– John P. Hussman, Ph.D., New Economy or Unfinished Cycle?, June 18, 2007. The S&P 500 would peak just 2% higher in October of that year, followed by a collapse of more than 55%.
“Old ways of valuing stocks are outdated. A technological revolution has created opportunities for continued low inflation, expanding profits and rising productivity. Thanks to these factors, the United States may be able to enjoy an extended period of expanding stock prices. Jumping out now would leave you poorer than you might become if you have some faith.”
– Los Angeles Times, May 11, 1999. While it’s tempting to counter that the S&P 500 would rise by more than 12% to its peak 10 months later, it’s easily forgotten that the entire gain was wiped out in the 3 weeks that followed, on the way to a 50% loss for the S&P 500 and an 83% loss for the tech-heavy Nasdaq 100.
“Stock prices returned to record levels yesterday, building on the rally that began in late trading on Wednesday… ‘It’s all real buying’ [said the head of index futures at Shearson Lehman Brothers], ‘The excitement here is unbelievable. It’s steaming.’ The continuing surge in American stock prices has produced a spate of theories. [The] chief economist of Kemper Financial Services Inc. in Chicago argued in a report that, contrary to common opinion, American equities may not be significantly overpriced. For one thing, [he] said, ‘The market may be discounting a far-larger rise in future corporate earnings than most investors realize is possible, [and foreign investment] may be altering the traditional valuation parameters used to determine share-price multiples.’ He added, ‘It is quite possible that we have entered a new era for share price evaluation.’”
– The New York Times, August 21, 1987 (the S&P advanced by less than 1% over the next 3 sessions, and then crashed)
“The failure of the general market to decline during the past year despite its obvious vulnerability, as well as the emergence of new investment characteristics, has caused investors to believe that the U.S. has entered a new investment era to which the old guidelines no longer apply. Many have now come to believe that market risk is no longer a realistic consideration, while the risk of being underinvested or in cash and missing opportunities exceeds any other.”
– Barron’s Magazine, February 3, 1969. The bear market that had already quietly started in late-1968 would take stocks down by more than one-third over the next 18 months, and the S&P 500 Index would stand below its 1968 peak even 14 years later.
“The ‘new-era’ doctrine – that ‘good’ stocks (or ‘blue chips’) were sound investments regardless of how high the price paid for them — was at bottom only a means for rationalizing under the title of ‘investment’ the well-nigh universal capitulation to the gambling fever.”
– Benjamin Graham & David Dodd, Security Analysis, 1934, following the 1929-1932 collapse
“The recent collapse is the climax, but not the end, of an exceptionally long, extensive and violent period of inflation in security prices and national, even world-wide, speculative fever. This is the longest period of practically uninterrupted rise in security prices in our history… The psychological illusion upon which it is based, though not essentially new, has been stronger and more widespread than has ever been the case in this country in the past. This illusion is summed up in the phrase ‘the new era.’ The phrase itself is not new. Every period of speculation rediscovers it.”
– Business Week, November 1929. The market collapse would ultimately exceed 80%.
This time is not different
Mark Twain once said “History doesn’t repeat itself, but it does rhyme.” Unfortunately, the failure of history to precisely replicate itself, in every detail, is at the heart of the failure of humanity to learn from it. Each separate instance has its own wrinkles, and within those wrinkles is wrapped the delusion that “this time” is cut from wholly different cloth.
In the financial markets, the unique features of “this time” entice investors away from systematic analysis and durable relationships. It eventually becomes enough simply to refer to buzz-words – “investment companies,” “technology,” “globalization,” “conglomerates,” “dot-com,” “leveraged buy-outs,” “quantitative easing” – as a substitute for data and clear-sighted analysis. If doctors behaved like investors do, every time a new strain of virus emerged, they would declare a “new era,” immediately disregarding every known principle of the immune system until everybody was dead.
This is emphatically not to say that new changes in the economic or financial environment should be disregarded. The argument is exactly the opposite. When faced with these changes, our obligation as careful investors is to obtain data and quantify, as well as possible, how they reasonably impact the objects we care about, such as long-term cash flows, valuations, and prospective returns. We can’t just take some buzz-word and bandy verbal arguments about, without any analysis at all.
We should never abandon our central principles just because we see a squirrel outside the window. If we think the squirrel is important, we have to obtain data and carefully study and quantify how squirrels actually affect the things we care about. We don’t just say, “Ooh, the squirrel has created a new era, so we’re going to ignore everything that history has taught us.”
So if we care about how interest rates might impact valuations, we shouldn’t just switch our brains off and talk about low interest rates. We should quantify their effect (see, for example, The Most Broadly Overvalued Moment in Market History). If we think changes in economic policy might affect economic growth, we shouldn’t just toss around figures that pop out of our imagination. We should estimate the potential range of outcomes, given the conditions that systematically determine GDP growth (see Economic Fancies and Basic Arithmetic). If we think the international activity of U.S. corporations has changed traditional valuation relationships, we should explicitly quantify that impact (see The New Era is an Old Story). If we wonder why valuations and subsequent market returns are systematically related even though interest rates aren’t constant, we should work out the arithmetic and examine the historical relationships (see Rarefied Air: Valuations and Subsequent Market Returns). If we wonder whether valuation measures should be corrected for the level of profit margins embedded in those measures, we should collect the data and evaluate the question (see Margins, Multiples, and the Iron Law of Valuation). And if we think that we could be living in a new era of permanently high profit margins, we might want to quantify the evidence before adopting that view (see below).
Every episode in history has its own wrinkles. But investors should not use some “new era” argument to dismiss the central principles of investing, as a substitute for carefully quantifying the impact of those wrinkles. Unfortunately, because investors get caught up in concepts, they come to a point in every speculative episode where they ignore the central principles of investing altogether. The allure of those wrinkles is what leads investors to forget the lessons of history, and repeat its tragedies again and again.
This time is not different, because this time is always different.
Throwing in the towel
When a boxer is taking a beating, to avoid further punishment, a towel is sometimes thrown from the corner as a token of defeat. Yet even after the towel is thrown, a judicious referee has the right to toss the towel back into the corner and allow the fight to continue.
For decades, Jeremy Grantham, a value investor whom I respect tremendously, has championed the idea, recognized by legendary value investors like Ben Graham, that current profits are a poor measure of long-term cash flows, and that it is essential to adjust earnings-based valuation measures for the position of profit margins relative to their norms. In Grantham’s words, “Profit margins are probably the most mean-reverting series in finance, and if profit margins do not mean-revert, then something has gone badly wrong with capitalism.”
He learned this lesson early on, during the collapse that followed the go-go years of the late-1960’s. Grantham once described his epiphany: “I got wiped out personally in 1968, which was the last really crazy, silly stock market before the Internet era… I became a great reader of history books. I was shocked and horrified to discover that I had just learned a lesson that was freely available all the way back to the South Sea Bubble.”
In recent weeks, Grantham has essentially thrown in the towel, suggesting “this time is decently different”:
“Stock prices are held up by abnormal profit margins, which in turn are produced mainly by lower real rates, the benefits of which are not competed away because of increased monopoly power… In conclusion, there are two important things to carry in your mind: First, the market now and in the past acts as if it believes the current higher levels of profitability are permanent; and second, a regular bear market of 15% to 20% can always occur for any one of many reasons. What I am interested in here is quite different: a more or less permanent move back to, or at least close to, the pre-1997 trends of profitability, interest rates, and pricing. And for that it seems likely that we will have a longer wait than any value manager would like (including me).”
I’ve received a flurry of requests for my views on Grantham’s shift.
My simple response is to very respectfully toss Grantham’s towel back into the corner.
First, Grantham argues that much of the benefit to margins is driven by lower real interest rates. The problem here is twofold: the relationship between real interest rates and corporate profit margins has been extremely tenuous in market cycles across history, and the debt of U.S. corporations as a ratio to revenues is more than double its historical median, leaving total interest costs, relative to corporate revenues, no lower than the post-war norm.
With regard to real interest rates, there’s always a question of how one adjusts for inflation, which can involve trailing rates or inflation rates implied by inflation-protected securities. Because inflation has such a strong serial correlation over time, the distinction matters less than one might think, but it typically helps to use a 2-year trailing rate rather than just the past 12 months. The chart below shows the historical relationship between corporate profit margins and real Treasury yields on that basis. Real interest rates are shown in blue on an inverted (left) scale, with profit margins shown in red on the right scale.
About the only segment worth mentioning is a short span during the rapid disinflation of the early 1980’s. Both Treasury yields and wage inflation fell more slowly than general prices did, so real interest rates, real wage rates, and the U.S. dollar all shot higher (leading to the 1985 Plaza Accord). Aside from that, there’s not much to see here: no systematic relationship between real interest rates and profit margins across economic cycles.
I would argue that what’s really going on with profit margins is quite different than what Grantham suggests. As usual, my views reflect the data. Specifically, the elevation of profit margins in recent years has been a nearly precise reflection of declining labor compensation as a share of output prices. To illustrate this below, I’ve shown real unit labor costs (labor compensation per unit of output, divided by price per unit of output) in blue on an inverted left scale, with profit margins in red on the right scale. Real unit labor costs are de-trended, reflecting the fact that real wage growth has historically lagged productivity growth by about 0.4% annually. Since unit labor costs and the GDP deflator are indexed differently, the left scale values are meaningful on a relative basis, but shouldn’t be interpreted as actual fractions.
What’s notable here is that the process of profit margin normalization is already underway. Though there will certainly be cyclical fluctuations, this process is likely to continue in an environment where the unemployment rate is now down to 4.4% and demographic constraints are likely to result in labor force growth averaging just 0.3% annually between now and 2024. Total employment will grow at the same rate only if the unemployment rate remains at current levels. That creates a dilemma for profit margins: if economic growth strengthens in a tightening labor market, labor costs are likely to comprise an increasing share of output value, suppressing profit margins. If economic growth weakens, productivity is likely to slow, raising unit labor costs by contracting the denominator. [As a side-note, this analysis links up with the Kalecki profits equation because a depressed wage share is typically associated with weak household savings and high government transfer payments].
It’s tempting to imagine that offshoring labor would allow a sustained below-trend retreat in real unit labor costs. But while foreign labor can be cheaper, the corresponding productivity is also often lower, so the impact on unit labor costs is more nuanced than one might think.
So again, with great respect, my response is to encourage Grantham to pick up his towel and lace up his gloves. Even if profit margins have moved forever higher, they are unlikely to have moved to the point where recent highs are the new average. Given that valuations fluctuate around norms that reflect those average margins, valuations are now so obscenely elevated that even an outcome that fluctuates modestly about some new, higher average would easily take the S&P 500 35-40% lower over the completion of the current market cycle.
Let’s also be careful to distinguish the level of valuations from the mapping between valuations and subsequent returns. As evidenced below, there’s utterly no evidence that the link between historically reliable valuation measures and actual subsequent market returns has deteriorated in any way during recent cycles. So even if one wishes to assume that future valuation norms will be higher than “old” historical norms, it follows that one should also assume that future market returns will be lower than “old” historical norms. It has taken the third financial bubble in 17 years to bring the total return of the S&P 500 to 4.7% annually since the 2000 peak. Don’t imagine that future returns will be much better from current valuations, even if future valuations maintain current levels forever. Indeed, my actual expectation is that the completion of the current market cycle will wipe out the entire total return of the S&P 500 since 2000.
Back to the Iron Laws
If we are careful about history, evidence, and market analysis, we repeatedly find that the central principles of investing are captured by a few iron laws. Two of them are particularly important in our discipline.
The first of these is what I call the Iron Law of Valuation: Long-term market returns are driven primarily by valuations. Every valuation ratio is just shorthand for a careful discounted cash-flow approach, so the denominator of any valuation ratio had better be a “sufficient statistic” for the likely stream of cash flows that will be delivered to investors over decades. For market-based measures, revenues are substantially more reliable than current earnings or next year’s estimated earnings.
A second principle is what I call the Iron Law of Speculation: While valuations determine long-term and full-cycle market outcomes, investor preferences toward speculation, as evidenced by uniformity of market action across a broad range of internals, can allow the market to continue higher over shorter segments of the cycle, despite extreme overvaluation. At rich valuations, one had better monitor those internals explicitly, because deterioration opens the way to collapse.
As usual, it’s worth briefly recalling not only the success of our discipline in previous complete market cycles, but also the elephant that I let into the room in 2009. Whatever criticism one might direct toward my shortcomings in the half-cycle since 2009, the fact is this. Our challenges arose from my 2009 insistence on stress-testing our methods against Depression-era data, which inadvertently created a specific vulnerability that should be distinguished from our current outlook. Prior to 2009, overvalued, overbought, overbullish conditions were associated with average market returns below risk-free T-bill returns regardless of whether market internals were favorable or not. But the novelty of the Federal Reserve’s deranged monetary policy response encouraged yield-seeking speculation by investors well after historically reliable “overvalued, overbought, overbullish” conditions appeared. In a zero interest rate environment, one had to wait for explicit deterioration in market internals to signal a weakening of investors’ willingness to speculate, before adopting a hard-negative outlook.
Put simply, the methods that emerged from that stress-testing exercise inadequately accounted for the effect of zero interest rate policies on speculation, and that’s why our adaptations have focused primarily on imposing additional requirements related to market internals. In contrast, valuation relationships have remained completely intact in recent cycles, correctly identifying stocks as undervalued in 2009, but wickedly overvalued today. Our current defensiveness is fully consistent with the Iron Laws. With interest rates well off the zero bound and market internals already showing internal dispersion, now is not a time to become complacent, and certainly not a time to throw in the towel.
Hoping for greater fools
We’ll finish with a review of where the most reliable valuation measures stand, showing their relationship with actual subsequent market returns across history, and over recent market cycles. As I’ve observed before, whatever one proposes as being “different” this time had better be something that suddenly changed in the past 5 years and will persist forever. Undoubtedly, Wall Street will think of something, because as Hegel wrote, “We learn from history that we do not learn from history.”
The first chart updates the ratio of nonfinancial market capitalization to corporate gross value-added (including estimated foreign revenues), which we find better correlated with actual subsequent S&P 500 total returns than any other measure we’ve studied over time. Note in particular that no market cycle in history (even recent ones) has failed to take this measure to half of its current level by the completion of the cycle. But if one wishes to rule that possibility out, notice that even the level of 1.35 was observed as recently as 2012, and is about the highest level ever observed in post-war data prior to 1998. A retreat merely to that level over the completion of the current market cycle would involve a loss of one-third of the value of the S&P 500.
We certainly don’t require a breathtaking market loss in order to adopt a constructive or aggressive market outlook, particularly at the point that a material retreat in valuations is joined by early improvement in market action. Yet investors seem to rule out any material retreat at all. In effect, investors are arguing not only that elevated valuations are justified; they are also quietly assuming that market cycles have been abolished. This is a much larger mistake.
The chart below shows MarketCap/GVA on an inverted log scale (left), along with actual subsequent S&P 500 nominal average annual total returns over the following 12-year horizon. Note that the current estimate of near-zero total returns also includes dividends, meaning that we fully expect the S&P 500 Index itself to be lower 12 years from now than it is today.
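The mapping from a valuation ratio to a long-horizon return estimate is back-of-the-envelope arithmetic: grow the fundamental at some nominal rate, let the valuation ratio mean-revert toward its norm over the horizon, and add the dividend yield. The inputs below are illustrative assumptions, not the author's actual figures, but they show how a valuation at twice its norm produces a near-zero 12-year estimate:

```python
def expected_annual_return(v_now, v_norm, growth, dividend_yield, years=12):
    """Rough annual total-return estimate over `years`, assuming the
    fundamental grows at `growth` and the valuation ratio moves from
    v_now to v_norm by the end of the horizon."""
    return (1 + growth) * (v_norm / v_now) ** (1 / years) - 1 + dividend_yield

# Illustrative inputs only: valuation at twice its norm,
# 4% nominal growth in fundamentals, 2% dividend yield.
est = expected_annual_return(v_now=2.0, v_norm=1.0, growth=0.04, dividend_yield=0.02)
print(f"{est:.2%}")
```

Note the sanity check embedded in the formula: at normal valuations (v_now equal to v_norm), the estimate collapses to growth plus dividend yield, which is the classical long-run return decomposition.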
I can’t emphasize enough that the mapping between reliable valuation measures and subsequent market returns is a calculation that does not require adjustment for the level of interest rates. Rather, one uses interest rates for comparative purposes after the expected return calculation is made (see The Most Broadly Overvalued Moment In Market History to understand this distinction). While investors seem to look at low interest rates as if they are a good thing, what low interest rates really do here is to lock passive investors in conventional portfolios into a situation where they have no way to avoid dismal outcomes in the coming 10-12 years. Unlike even 2000, when 6.2% Treasury bond yields provided a conventional alternative to obscenely overvalued stocks, yields on all conventional assets are uniformly depressed here. We presently estimate that the total return of a passive, conventional portfolio mix of 60% stocks, 30% bonds, and 10% cash will hardly exceed 1% annually over the coming 12-year horizon.
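The roughly 1% figure for the conventional mix follows from a simple weighted average of per-asset return estimates. The component figures below are hypothetical stand-ins (near zero for stocks, modest positive for bonds and cash over the horizon), not the author's published inputs:

```python
def blended_return(weights, returns):
    """Weighted-average expected return of a fixed-weight portfolio."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(w * r for w, r in zip(weights, returns))

# 60% stocks / 30% bonds / 10% cash, with hypothetical 12-year estimates:
# ~0% stocks, ~2.4% bonds, ~1% cash.
mix = blended_return([0.60, 0.30, 0.10], [0.000, 0.024, 0.010])
print(f"{mix:.1%}")
```

With stocks contributing essentially nothing, the bond and cash sleeves are doing all the work, and the blend lands below 1% annually, which is the shape of the dilemma the paragraph describes.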
My expectation is that prospects for long-term investors can be improved only to the extent they can tolerate the possibility of missing returns in the short-term in order to retain the ability to invest at lower valuations and higher prospective returns over the full course of the market cycle.
Since investors have been encouraged to use the distance of valuations from their March 2000 bubble peak as a benchmark for prospective returns, it’s worth noting that on the basis of measures that have been most strongly correlated with actual subsequent market returns in market cycles across history, current valuation levels don’t put the “you are here” sign in 1996, or 1997, or even 1998, but rather in late-December 1999. The ebullience of the tech rampage at the time gave the market a level of momentum that it does not have at present, and the price/revenue multiple of the median S&P 500 component is already 50% beyond the 2000 extreme. Still, if one wishes to use the 2000 peak as a valuation bell to ring, it’s worth recognizing how close valuations are to that peak already. Investors should again recall that once the 2000 peak was in, it took just three weeks for the market to plunge more than 11%, bringing valuations below those December 1999 levels. Investors hoping to sell to greater fools had better be able to call a top rather precisely.
With weak estimated 10-12 year returns expected for conventional portfolios, investors should not exclude alternative assets, tactical strategies, hedged equity, or at least a good amount of dry powder from consideration. For more comments on alternative investment strategies, see When Speculators Prosper Through Ignorance. As Grantham mentioned, we do see investors engaging in put-writing strategies as a way to generate “income.” In the illusion that stocks cannot decline steeply, these strategies have become so aggressive that implied option volatilities were recently driven to levels seen in just 1% of history. That’s not a crash warning by itself, but depressed volatility did appear in combination with divergent market internals and overvalued, overbought, overbullish conditions approaching the 1987 and 2007 peaks. In my view, shorting cheap put options in a high-risk environment like this one is like writing cheap auto insurance at the Demolition Derby. In the short-run, these strategies crush the volatility index (VIX), but if (and I expect when) a steep market loss turns those short puts into in-the-money obligations to take stock off of other investors’ hands at a fixed price, they are likely to contribute to an acceleration of selling pressure, just as portfolio insurance did in 1987.
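The asymmetry of put-writing described above can be sketched in a few lines: the writer keeps a small premium while the market holds up, but a steep decline turns the short put into an obligation to buy stock at the strike. The strike, premium, and spot prices below are hypothetical illustrations, not market data:

```python
# Payoff of a written (short) put held to expiry: small, capped gains in
# quiet markets; large losses in a steep decline. All numbers hypothetical.
def short_put_pnl(spot_at_expiry, strike, premium):
    """P&L per share for the put writer at expiry."""
    intrinsic = max(strike - spot_at_expiry, 0.0)
    return premium - intrinsic

strike, premium = 100.0, 2.0
print(short_put_pnl(105.0, strike, premium))  # quiet market: keep the $2 premium
print(short_put_pnl(70.0, strike, premium))   # steep decline: lose $28 per share
```

The cheap-insurance analogy in the text corresponds to the small `premium` relative to the large potential `intrinsic` loss when volatility is priced at historic lows.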
We’d be inclined toward exactly the opposite strategy. Given low option premiums, along with extreme valuations, interest rates above the zero-bound, and market internals showing continued dispersion, patient investors who can tolerate the potential risk of a slow and annoying drip of option decay over a shorter segment of the market cycle could also see asymmetrically large returns from tail-risk hedges as the market cycle is completed. That kind of position always has to be limited to a small percentage of assets, and should be limited mainly to conditions that join unfavorable valuations and internal dispersion. In the face of marginal new highs, it’s also a reasonable concession to be slow about raising strike prices, since the main object of interest is the 50-60% gap from current valuations to historical norms. As a side note, yes, we did see “Hindenburg” on Thursday, so last week’s high was sloppy internally, but we can’t rule out further short-term upside.
While about 80% of the market gain since the 2009 low occurred by the end of 2014, a sequence of novelties since then (particularly Brexit and the U.S. election) have worked to extend this high-level top-formation. Still, I continue to view this as a period of top-formation. Unlike Irving Fisher in September 1929, I have no expectation that the market has reached a “permanently high plateau.”
Contemporary criticisms of central banks echo debates from times past
TWENTY years ago next month, the British government gave the Bank of England the freedom to set interest rates. That decision was part of a trend that made central bankers the most powerful financial actors on the planet, not only setting rates but also buying trillions of dollars’ worth of assets, targeting exchange rates and managing the economic cycle.
Although central banks have great independence now, the tide could turn again. Central bankers across the world have been criticised for overstepping their brief, having opined about broader issues (the Reserve Bank of India’s Raghuram Rajan on religious tolerance, the Bank of England’s Mark Carney on climate change). In some countries the fundamentals of monetary policy are under attack: Recep Tayyip Erdogan, the president of Turkey, has berated his central bank because of his belief that higher interest rates cause inflation. And central banks have been widely slated for propping up the financial sector, and denting savers’ incomes, in the wake of the financial crisis of 2007-08.
Such debate is almost as old as central banking itself. Over more than 300 years, the power of central banks has ebbed and flowed as governments have by turns enhanced and restricted their responsibilities in response to economic necessity and intellectual fashion. Governments have asked central banks to pursue several goals at once: stabilising currencies; fighting inflation; safeguarding the financial system; co-ordinating policy with other countries; and reviving economies.
These goals are complex and not always complementary; it makes sense to put experts in charge. That said, the actions needed to attain them have political consequences, dragging central banks into the democratic debate. In the early decades after American independence, two central banks were founded and folded before the Federal Reserve was established in 1913. Central banks’ part in the Depression of the 1930s, the inflationary era of the 1960s and 1970s and the credit bubble in the early 2000s all came under attack.
Bankers to the government
The first central banks were created to enhance the financial power of governments. The pioneer was the Sveriges Riksbank, set up as a tool of Swedish financial management in 1668 (the celebration of its tercentenary included the creation of the Nobel prize in economics). But the template was set by the Bank of England, established in 1694 by William III, ruler of both Britain and the Netherlands, in the midst of a war against France. In return for a loan to the crown, the bank gained the right to issue banknotes. Monarchs had always been prone to default—and had the power to prevent creditors from enforcing their rights. But William depended on the support of Parliament, which reflected the interests of those who financed the central bank. The creation of the bank reassured creditors and made it easier and cheaper for the government to borrow.
No one at the time expected these central banks to evolve into the all-powerful institutions of today. But a hint of what was to come lay in the infamous schemes of John Law in France from 1716 to 1720. He persuaded the regent (the king, Louis XV, was an infant) to allow him to establish a national bank, and to decree that all taxes and revenues be paid in its notes. The idea was to relieve the pressure on the indebted monarchy. The bank then assumed the national debt; investors were persuaded to swap the bonds for shares in the Mississippi company, which would exploit France’s American possessions.
One of the earliest speculative manias ensued: the word “millionaire” was coined as the Mississippi shares soared in price. But there were no profits to be had from the colonies and when Law’s schemes collapsed, French citizens developed an enduring suspicion of high finance and paper money. Despite this failure, Law was on to something.
Paper money was a more useful medium of exchange than gold or silver, particularly for large amounts. Private banks might issue notes but they were less trustworthy than those printed by a national bank, backed by a government with tax-raising powers. Because paper money was a handier medium of exchange, people had more chance to trade; and as economic activity grew, government finances improved. Governments also noticed that issuing money for more than its intrinsic value was a nice little earner.
Alexander Hamilton, America’s first treasury secretary, admired Britain’s financial system. Finances were chaotic in the aftermath of independence: America’s first currency, the Continental, was afflicted by hyperinflation. Hamilton believed that a reformed financial structure, including a central bank, would create a stable currency and a lower cost of debt, making it easier for the economy to flourish.
His opponents argued that the bank would be too powerful and would act on behalf of northern creditors. In “Hamilton”, a hit hip-hop musical, the Thomas Jefferson character declares: “But Hamilton forgets/His plan would have the government assume state’s debts/Now, place your bets as to who that benefits/The very seat of government where Hamilton sits.”
Central banking was one of the great controversies of the new republic’s first half-century. Hamilton’s bank lasted 20 years, until its charter was allowed to lapse in 1811. A second bank was set up in 1816, but it too was resented by many. Andrew Jackson, a populist president, vetoed the renewal of its charter in 1832, and the bank’s federal charter expired in 1836.
Good as gold
A suspicion that central banks were likely to favour creditors over debtors was not foolish. Britain had moved onto the gold standard by accident around the turn of the 18th century, after the Royal Mint set the value of gold relative to silver higher than it was abroad, and silver flowed overseas. Since Bank of England notes could be exchanged on demand for gold, the bank was in effect committed to maintaining the value of its notes relative to the metal.
By extension, this meant the bank was committed to the stability of sterling as a currency. In turn, the real value of creditors’ assets (bonds and loans) was maintained; on the other side, borrowers had no prospect of seeing debts inflated away.
Gold convertibility was suspended during the Napoleonic wars: government debt and inflation soared. Parliament restored it in 1819, although only by forcing a period of deflation and recession. For the rest of the century, the bank maintained the gold standard with the result that prices barely budged over the long term. But the corollary was that the bank had to raise interest rates to attract foreign capital whenever its gold reserves started to fall. In effect, this loaded the burden of economic adjustment onto workers, through lower wages or higher unemployment. The order of priorities was hardly a surprise when voting was limited to men of property. It was a fine time to be a rentier.
The 19th century saw the emergence of another responsibility for central banks: managing crises. Capitalism has always been plagued by financial panics in which lenders lose confidence in the creditworthiness of private banks. Trade suffered at these moments as merchants lacked the ability to fund their purchases. In the panic of 1825 the British economy was described as being “within twenty-four hours of a state of barter.” After this crisis, the convention was established that the Bank of England act as “lender of last resort”. Walter Bagehot, an editor of The Economist, defined this doctrine in his book “Lombard Street”, published in 1873: the central bank should lend freely to solvent banks, which could provide collateral, at high rates.
The idea was not universally accepted; a former governor of the Bank of England called it “the most mischievous doctrine ever breathed in the monetary or banking world”. It also involved a potential conflict with a central bank’s other roles. Lending in a crisis meant expanding the money supply. But what if that coincided with a need to restrict the money supply in order to safeguard the currency?
As other countries industrialised in the 19th century, they copied aspects of the British model, including a central bank and the gold standard. That was the pattern in Germany after its unification in 1871.
America was eventually tipped into accepting another central bank by the financial panic of 1907, which was resolved only by the financial acumen of John Pierpont Morgan, the country’s leading banker. It seemed rational to create a lender of last resort that did not depend on one man. Getting a central bank through Congress meant assuaging the old fears of the “eastern money power”. Hence the Fed’s unwieldy structure of regional, privately owned banks and a central, politically appointed board.
Ironically, no sooner had the Fed been created than the global financial structure was shattered by the first world war. Before 1914 central banks had co-operated to keep exchange rates stable. But war placed domestic needs well ahead of any international commitments. No central bank was willing to see gold leave the country and end up in enemy vaults. The Bank of England suspended the right of individuals to convert their notes into bullion; it has never been fully reinstated. In most countries, the war was largely financed by borrowing: central banks resumed their original role as financing arms of governments, and drummed up investor demand for war debt. Monetary expansion and rapid inflation followed.
Reconstructing an international financial system after the war was complicated by the reparations imposed on Germany and by the debts owed to America by the allies. It was hard to co-ordinate policy amid squabbling over repayment schedules. When France and Belgium occupied the Ruhr in 1923 after Germany failed to make payments, the German central bank, the Reichsbank, increased its money-printing, unleashing hyperinflation. Germans have been wary of inflation and central-bank activism ever since.
The mark eventually stabilised and central banks tried to put a version of the gold standard back together. But two things hampered them. First, gold reserves were unevenly distributed, with America and France owning the lion’s share. Britain and Germany, which were less well endowed, were very vulnerable.
Second, European countries had become mass democracies, which made the austere policies needed to stabilise a currency in a crisis harder to push through. The political costs were too great. In Britain the Labour government fell in 1931 when it refused to enact benefit cuts demanded by the Bank of England. Its successor left the gold standard. In Germany Heinrich Brüning, chancellor from 1930 to 1932, slashed spending to deal with the country’s foreign debts but the resulting slump only paved the way for Adolf Hitler.
America was by then the most powerful economy, and the Fed the centrepiece of the interwar financial system (see chart 1). The central bank struggled to balance domestic and international duties. A rate cut in 1927 was designed to make life easier for the Bank of England, which was struggling to hold on to the gold peg it had readopted in 1925. But the cut was criticised for fuelling speculation on Wall Street. The Fed started tightening again in 1928 as the stockmarket kept booming. It may have overdone it.
If central banks struggled to cope in the 1920s, they did even worse in the 1930s. Fixated on exchange rates and inflation, they allowed the money supply to contract sharply. Between 1929 and 1933, 11,000 of America’s 25,000 banks disappeared, taking with them customers’ deposits and a source of lending for farms and firms. The Fed also tightened policy prematurely in 1937, creating another recession.
During the second world war central banks resumed their role from the first: keeping interest rates low and ensuring that governments could borrow to finance military spending. After the war, it became clear that politicians had no desire to see monetary policy tighten again. The result in America was a running battle between presidents and Fed chairmen. Harry Truman pressed William McChesney Martin, who ran the Fed from 1951 to 1970, to keep rates low despite the inflationary consequences of the Korean war. Martin refused. After Truman left office in 1953, he passed Martin in the street and uttered just one word: “Traitor.”
Lyndon Johnson was more forceful. He summoned Martin to his Texas ranch and bellowed: “Boys are dying in Vietnam and Bill Martin doesn’t care.” Typically, Richard Nixon took the bullying furthest, leaking a false story that Arthur Burns, Martin’s successor, was demanding a 50% pay rise. Attacked by the press, Burns retreated from his desire to raise interest rates.
In many other countries, finance ministries played the dominant role in deciding on interest rates, leaving central banks responsible for financial stability and maintaining exchange rates, which were fixed under the Bretton Woods regime. But like the gold standard, the system depended on governments’ willingness to subordinate domestic priorities to the exchange rate. By 1971 Nixon was unwilling to bear this cost and the Bretton Woods system collapsed. Currencies floated, inflation took off and worse still, many countries suffered high unemployment at the same time.
This crisis gave central banks the chance to develop the powers they hold today. Politicians had shown they could not be trusted with monetary discipline: they worried that tightening policy to head off inflation would alienate voters. Milton Friedman, a Chicago economist and Nobel laureate, led an intellectual shift in favour of free markets and controlling the growth of the money supply to keep inflation low. This “monetarist” approach was pursued by Paul Volcker, appointed to head the Fed in 1979. He raised interest rates so steeply that he prompted a recession and doomed Jimmy Carter’s presidential re-election bid in 1980. Farmers protested outside the Fed in Washington, DC; car dealers sent coffins containing the keys of unsold cars. But by the mid-1980s the inflationary spiral seemed to have been broken.
The rise to power
In the wake of Mr Volcker’s success, other countries moved towards making central banks more independent, starting with New Zealand in 1989. Britain and Japan followed suit. The European Central Bank (ECB) was independent from its birth in the 1990s, following the example of Germany’s Bundesbank. Many central bankers were asked to target inflation, and left to get on with the job. For a long while, this approach seemed to work perfectly. The period of low inflation and stable economies in the 1990s and early 2000s was known as the “Great Moderation”. Alan Greenspan, Mr Volcker’s successor, was dubbed the “maestro”. Rather than bully him, presidents sought his approbation for their policies.
Nevertheless, the seeds were being sown for today’s attacks on central banks. In the early 1980s financial markets began a long bull run as inflation fell. When markets wobbled, as they did on “Black Monday” in October 1987, the Fed was quick to slash rates. It was trying to avoid the mistakes of the 1930s, when it had been too slow to respond to financial distress. But over time the markets seemed to rely on the Fed stepping in to rescue them—a bet nicknamed the “Greenspan put”, after an option strategy that protects investors from losses. Critics said that central bankers were encouraging speculation.
However, there was no sign that the rapid rise in asset prices was having an effect on consumer inflation. Raising interest rates to deter stockmarket speculation might inflict damage on the wider economy. And although central banks were supposed to ensure overall financial stability, supervision of individual banks was not always in their hands: the Fed shared responsibility with an alphabet soup of other agencies, for example.
When the credit bubble finally burst in 2007 and 2008, central banks were forced to take extraordinary measures: pushing rates down to zero (or even below) and creating money to buy bonds and crush long-term yields (quantitative easing, or QE: see chart 2). As governments tightened fiscal policy from 2010 onwards, it sometimes seemed that central banks were left to revive the global economy alone.
Their response to the crisis has called forth old criticisms. In an echo of Jefferson and Jackson, QE has been attacked for bailing out the banks rather than the heartland economy, for favouring Wall Street rather than Main Street. Some Republicans want the Fed to make policy by following set rules: they deem QE a form of printing money. The ECB has been criticised both for favouring northern European creditors over southern European debtors and for cosseting southern spendthrifts.
And central banks are still left struggling to cope with their many responsibilities. As watchdogs of financial stability, they want banks to have more capital. As guardians of the economy, many would like to see more lending. The two roles are not always easily reconciled.
Perhaps the most cutting criticism they face is that, despite their technocratic expertise, central banks have been repeatedly surprised. They failed to anticipate the collapse of 2007-08 or the euro zone’s debt crisis. The Bank of England’s forecasts of the economic impact of Brexit have so far been wrong. It is hard to justify handing power to unelected technocrats if they fall down on the job.
All of which leaves the future of central banks uncertain. The independence granted them by politicians is not guaranteed. Politicians rely on them in a crisis; when economies recover they chafe at the constraints central banks impose. If history teaches anything, it is that central banks cannot take their powers for granted.