Category Archives: Financial crisis

Harvard Academic Sees Debt Rout Worse Than 1994 ‘Bond Massacre’ Bloomberg, by Anchalee Worrachate

If you thought you had already read the gloomiest possible prognosis for bonds, wait until you read this one.

Paul Schmelzing, a PhD candidate at Harvard University and a visiting scholar at the Bank of England, said if the latest bond market bubble bursts, it will be worse than in 1994 when global government bonds suffered the biggest annual loss on record.

“Looking back over eight centuries of data, I find that the 2016 bull market was indeed one of the largest ever recorded,” wrote Schmelzing in an article posted on Bank Underground, which is a blog run by Bank of England staff. “History suggests this reversal will be driven by inflation fundamentals, and leave investors worse off than the 1994 ‘bond massacre’”.

Schmelzing, whose research focuses on the history of international financial systems, divided modern-day bond bear markets into three major types: the inflation reversal of 1967-1971, the sharp reversal of 1994, and the value-at-risk shock in Japan in 2003.

The Bank of America Merrill Lynch Global Government Index of bonds fell 3.1 percent in its worst-ever annual loss in 1994 as then-Fed Chairman Alan Greenspan surprised investors by almost doubling the benchmark rate. Treasury 10-year yields surged from 5.6 percent in January to 8 percent in November.
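For a rough sense of how a yield surge of that size maps into price losses, the standard first-order duration approximation is enough. The sketch below is illustrative only, with an assumed duration rather than the actual characteristics of the Merrill Lynch index.

```python
# First-order duration approximation: the percentage price change of a bond
# (or bond index) is roughly minus its modified duration times the yield change.
def approx_price_change(modified_duration: float, yield_change_pct_pts: float) -> float:
    """Approximate percent change in price for a given move in yields (percentage points)."""
    return -modified_duration * yield_change_pct_pts

# Illustrative assumption: a portfolio with a modified duration of 5 years facing
# a 2.4 percentage-point rise in yields (roughly the 1994 move in 10-year Treasuries)
# loses about 12 percent before coupon income is counted.
print(approx_price_change(5.0, 2.4))  # -> -12.0
```

Coupon income and a mix of markets and maturities are among the reasons a broad global index can lose far less than a bare calculation like this suggests.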

 

Bank of England

The current bond market is facing the “perfect storm” of potential steepening of the bond yield curve, monetary policy tightening, and a multi-year period of sustained losses due to a “structural” return of inflation resembling that of 1967, he said. Last quarter was the worst for government bonds since 1987, according to data compiled by Bloomberg.

Global inflation expectations, as measured by the yield difference between nominal and index-linked bonds, have risen to the highest since May 2015 after falling to a record low in February last year.

“By historical standards, this implies sustained double-digit losses on bond holdings, subpar growth in developed markets, and balance sheet risks for banking systems with a large home bias,” Schmelzing said.
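The inflation-expectations gauge mentioned above – the yield difference between nominal and index-linked bonds – is the standard “breakeven” measure. A minimal sketch of the calculation follows; the yield figures are hypothetical, not the market levels the article refers to.

```python
# Breakeven inflation: the inflation rate at which a nominal bond and an
# index-linked bond of the same maturity would deliver roughly the same return.
def breakeven_inflation(nominal_yield: float, real_yield: float) -> float:
    """Simple breakeven approximation: the spread between the two yields (in percent)."""
    return nominal_yield - real_yield

# Hypothetical example: a 2.45% nominal 10-year yield against a 0.40% index-linked
# yield implies the market is pricing in roughly 2.05% average annual inflation.
print(f"{breakeven_inflation(2.45, 0.40):.2f}")  # -> 2.05
```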


Los señores de las finanzas (Lords of Finance)

“Money without measure forms the sinews of war.”

Cicero, Philippics

The twentieth century left behind two great episodes that have definitively marked economic and monetary history ever since: the German hyperinflation (1919-1923) and the Great Depression of 1929, which stretched through practically the whole of the following decade until the arrival of the Second World War, Europe’s second collective suicide in the span of a single generation. Both episodes matter not only for the magnitude of the events themselves, but for the powerful echo they left behind, an echo that reaches our own day still vibrant and exerts great influence on the conventional wisdom about the origin of, and remedy for, economic crises. Bécquer wrote that the memory a book leaves behind is more important than the book itself. Something similar happens with economic crises: the crisis matters as much as the sediment it leaves behind, above all in the collective imagination, because that sediment will decisively shape the way future episodes of crisis and financial panic are confronted.

With this powerful idea in mind – the importance of history and its consequences as valuable elements for understanding the present – comes Los señores de las finanzas (@EdicionesDeusto, 2010) by the economist Liaquat Ahamed. The book covers the intense events that unfolded from the end of the First World War to the Wall Street crash of October 1929 and, even more importantly, its consequences during the complex period of 1929-33 and the subsequent decade of depression from 1933 to 1944. The book’s great virtue, and the main reason it earned its Pulitzer Prize and deserves to be read, is that the story is told in rich detail, with prose that grips you from the first page, and through its main protagonists. The author gets two things right: first, placing the emphasis, as Disraeli advised with respect to the study of history, on the protagonists rather than the facts; and second, the choice of characters themselves: Montagu Norman (Bank of England), Benjamin Strong (Federal Reserve Bank of New York, later succeeded by George Harrison), Hjalmar Schacht (Reichsbank), and Émile Moreau (Banque de France) – that is, the leading central bankers of the day.

The economy is, above all, determined by the soundness of its monetary institutions, so a correct understanding of them is essential. As Mayer A. Rothschild, founder of the bank that bears his name, famously put it: “Let me issue and control a nation’s money and I care not who writes its laws.”

Beyond the obvious appeal of the narrative – which includes full profiles not only of the main characters but also of the secondary ones, and is packed with their anecdotes and personal experiences – this way of structuring the book lets the reader understand events through the character and thinking of the people who had to take the decisions that would determine those events. In this sense, the book’s ambition could not be greater. The work, as noted, centres on the crucial institution of money, a central aspect of the economy whose discussion nevertheless (and often inexplicably) seems confined to a few expert circles, out of view of the general public. The book surveys the monetary history of the world in one of its most intense periods: from the end of the classical gold standard after the Panic of 1907 in New York and the creation of the Federal Reserve (1913), to the period of global depression of 1933-1944 that would end tragically with the Second World War. The author also offers many sketches of how banking worked and how it related to the rapid industrialization and globalization of the final stretch of the nineteenth century up to 1913, as well as the industrial and social development of the early twentieth century – all of it, as noted, bringing the history to us through the eyes of its protagonists.

It is on the theoretical side where, in my view, the book is weakest. To begin with, it goes without saying that the Great Depression is perhaps one of the episodes whose interpretation generates the most disagreement among economists and historians (Niall Ferguson (@nfergus) makes the same observation in his magnificent Kissinger: 1923-1968: The Idealist), so discrepancies on these matters are, up to a point, to be expected. The received wisdom on the Great Depression is that it was caused largely by the architecture of the monetary system (at the time a gold-dollar standard, though many tend to confuse this model with the classical gold standard that existed until the creation of the Federal Reserve in 1913), and that its subsequent prolongation was due to the ‘laissez faire’ approach adopted by the (otherwise rather disastrous) Republican president Herbert Hoover. Only the arrival of Roosevelt’s governmental activism, resting on the theoretical corpus of thinkers such as Keynes (another of the story’s great secondary characters), would allow the economies as a whole to escape the depression. The other great scapegoat is the gold standard, the great culprit – above all for central bankers, then as now – turned into a veritable anathema ever since. The truth is that the evidence of the data, together with a correct understanding of how the monetary mechanism and the institution of money work, shows that nothing could be further from reality.

There is one antecedent – the creation of the Fed in 1913 after the Panic of 1907 – whose important consequences the book fails to grasp. It is what later made it possible to keep interest rates artificially low, which fed the speculative (and highly leveraged) bubble in stock prices on Wall Street. The origin of this policy, which would prove lethal, lay not in the United States but in Great Britain. The United Kingdom committed one of the great monetary-policy errors of this complex period by returning the pound to the gold standard, after the Peace of Versailles (1919), at the same parity as before the war. Winston Churchill, then Chancellor of the Exchequer and under heavy pressure from Norman, restored the gold standard at the old pre-war exchange rate, which sharpened the country’s post-war deflation. By fixing an ‘artificially’ high rate for political reasons – Great Britain did not want to lose standing on the world financial and geopolitical stage – the peg of the pound sterling to gold failed to recognize the severe loss of competitiveness the economy had suffered during the war. This depressed the economy at a moment when, moreover, none of the major economies had any reform agenda under way; quite the contrary.


After the war, the European nations were not only mutually devastated but also heavily indebted and left with less than half of the gold reserves they had previously held. The United States, by contrast, had grown its reserves enormously, to the point that they were double those of the other powers combined. The objective after the war was to restart the economy (restore competitiveness) and restart the mechanisms of credit. The first was never done, and the second was forced through a Fed policy of low rates, which provided the oxygen the damaged and dysfunctional British economy needed to limp along through those years. Meanwhile, an economy in full expansion and gripped by an investment fever (for the first time, during the Roaring Twenties, stock-market investing became enormously popular, and a very large share of it was done on leverage) was being artificially distorted by a wave of cheap credit whose only possible outcome was an unavoidable adjustment.

At this point it is worth underlining that the main factor behind the speculative stock-market bubble – which would be especially acute between late 1925 and 1929 – was the policy of repeated interest-rate cuts. Once the adjustment arrived, government and Fed activism followed. The Hoover administration increased public spending (notably the budget lines for infrastructure investment, such as the Hoover Dam itself), imposed price and wage controls, and raised the tax burden – an economic interventionism that would reach its paroxysm under Roosevelt. The Fed, for its part, contracted the monetary base between 1929 and 1933 (something unprecedented, as Milton Friedman and Anna Schwartz stress in their great work A Monetary History of the United States), which needlessly complicated matters. In short, a set of economic-policy errors born of the interventionist (later Keynesian) outlook, without which it is impossible to understand either the gestation of the crisis or its unusual severity and duration. Lorenzo Bernaldo de Quirós (@BernaldoDQuiros) sums it up masterfully in the superb ¿Estado o Mercado? (Deusto, 2010).

Neither then nor now has the bulk of conventional wisdom managed to draw the alternatives correctly, or to separate “flaws” in the design of the monetary system from failures of policy. At the time two camps faced each other: those in favour of gold-standard orthodoxy, and those – most prominently Keynes and Roosevelt – in favour of abandoning the system in order to issue credit without restriction. The truth, as always, lay “somewhere in the middle”: the crisis was not of a nature that could be solved by flooding the markets with liquidity (witness the Fed’s extraordinary liquidity policies between 2009 and 2015, which have only aggravated the fragility of the situation); and, at the same time, it was a mistake to fix exchange rates against gold that did not correspond to the countries’ realities, which in due course distorted trade flows and, with them, the flows of gold between countries, deepening the depression in some (i.e. the United Kingdom) while over-stimulating activity in others (i.e. the United States).

John Müller (@cultrun) recently recalled, in a notable article in El Español, the words of Aldous Huxley: “Perhaps the greatest lesson of history is that no one learned the lessons of history.” Amen. That is why it is worth complementing Ahamed’s great work of documentation with other, academically more complete readings (or rather, readings not “obsessed” with deflation). Besides the already-mentioned book by Bernaldo de Quirós, let me recommend two: Murray N. Rothbard’s classic America’s Great Depression (available in PDF at this link), which is the main pillar of the explanation of the Great Depression outside the canons of conventional wisdom (the latter can be found in John K. Galbraith’s The Great Crash 1929); and Currency Wars (2011) by James G. Rickards (@JamesGRickards), an essential book that I still owe a review.

All points of view are necessary for a correct understanding of the monetary phenomenon, which forms the foundation on which the economy rests and without which it is impossible to have a complete view of social and political phenomena either. Happy reading, and a happy 2016.

War, Big Government and Lost Freedom, by Dr. Richard B. Ebeling

 

We are currently marking the hundredth anniversary of the fighting of the First World War. For four years between the summer of 1914 and November 11, 1918, the major world powers were in mortal combat with each other. The conflict radically changed the world. It overthrew the pre-1914 era of relatively limited government and free market economics, and ushered in a new epoch of big government, planned economies, and massive inflations, the full effects from which the world has still not recovered.

All the leading countries of Europe were drawn into the war. It began when the archduke of Austria-Hungary, Franz Ferdinand, and his wife, Sophia, were assassinated in Bosnia in June 1914. The Austro-Hungarian government claimed that the Bosnian-Serb assassin had the clandestine support of the Serbian government, which the government in Belgrade denied.

How a Terrible War Began and Played Out

Ultimatums and counter-ultimatums soon set in motion a series of European military alliances among the Great Powers. In late July and early August, the now-warring parties issued formal declarations of war. Imperial Germany, the Turkish Empire, and Bulgaria supported Austria-Hungary. Imperial Russia supported Serbia, which soon brought in France and Great Britain because these countries were aligned with the czarist government in St. Petersburg. Italy entered the war in 1915 on the side of the British and the French.

The United States joined the conflict in April 1917, a month after the abdication of the Russian czar and the establishment of a democratic government in Russia. But this first attempt at Russian democracy was overthrown in November 1917, when Vladimir Lenin led a communist coup d’état; Lenin’s revolutionary government then signed a separate peace with Imperial Germany and Austria-Hungary in March 1918, taking Russia out of the war.

The arrival of large numbers of American soldiers in France in the summer of 1918, however, turned the balance of forces against Germany on the Western Front. After being driven out of the French territory they had occupied since the first year of the war, the Germans agreed to the armistice of November 11, 1918, which ended what was already being called the Great War – the “War to End All Wars,” as it was falsely believed to be.


The Human and Material Costs of War

The human and material cost of the First World War was immense. During the conflict more than 60 million men were called up to fight. At least 20 million soldiers and civilians lost their lives, with an equal number wounded.

The participating governments combined spent more than $145.9 billion fighting each other. In 2015 dollars, this represents a monetary expenditure of more than $3.8 trillion. (As a point of comparison, the U.S. government alone spent almost as much in fiscal year 2015 – $3.6 trillion – as all the belligerent powers together spent fighting each other over the four years of World War I.)

These numbers, of course, do not capture the human suffering from the four years of war. On the Western Front, which ran through northern France from the English Channel to the Swiss border, millions of soldiers lived endless months – years – in frontline trench warfare. They fought in the heat of the summer and the cold of winter, often with the decomposing bodies of their fallen comrades next to them for days on end.

They fought in battles such as the one for the French town of Verdun in which hundreds of thousands of men were killed during human wave attacks in attempts to capture enemy positions. Soldiers were mowed down by machine guns or crushed under the treads of that new machine of war, the tank.

The airplane entered modern warfare for the first time, raining down bombs on both military and civilian targets. And both sides introduced the use of poison mustard gas that blinded the eyes, blistered the lungs, and brought agonizing death.

War and the End of Limited Government Liberalism

The First World War also brought about the end of the (classical) liberal epoch in modern Western civilization. For most of the 100 years before 1914, the Western world had moved in the direction of greater individual freedom and wider economic liberty.

All-powerful kings were replaced with representative democratic government or constitutionally limited monarchy. Expanding civil liberty brought about a more impartial equality before the law and the end of human slavery.

The older eighteenth century mercantilist system of economic planning and control by government was ended. In its place, arose domestic free enterprise and widening global freedom of trade. The standard of living of tens of millions in the West began to dramatically rise above subsistence and starvation for the first time in human history, while at the same time population sizes grew exponentially.

War may not have been abolished in the nineteenth century, but new international “rules of war” meant that wars were less frequent, of shorter duration, and, among the Great Powers at least, often involved fewer deaths and greater respect for civilian life and property.

(The American Civil War in the 1860s was the one major exception with more than 650,000 deaths and massive destruction in the Southern states.)

Wars and armament races, many argued at the time, had become too costly and destructive among “civilized” nations. A universal epoch of international peace was hoped for when the new century dawned in 1900.

But in 1914, the First World War shattered the long liberal peace that had more or less prevailed in Europe since the last world war that ended with the defeat of Napoleon’s France in 1815. But even before 1914, there were emerging anti-liberal forces that were moving the world toward greater government control and a renewal of international conflict. (See my article, “Before Modern Collectivism: The Rise and Fall of Classical Liberalism.”) 


The Rise of Nationalism and Socialism

Early in the nineteenth century, the ideology of nationalism became a new rallying cry for peoples throughout Europe and increasingly around the world. If liberalism had espoused peaceful market exchange and the freedom of individuals under the rule of law, nationalism called for the forced unification under one government of all peoples speaking the same language or sharing the same culture or ethnicity. National collectivism was considered a higher ideal than respect for the liberty of the individuals comprising communities and nations.

In the middle of the nineteenth century, another form of collectivism started to gain popularity and support: socialism. Karl Marx and other socialists argued that capitalism was the root of all social evil, causing poverty and resulting in exploitation of the masses for the benefit of those who privately owned the means of production. Socialists called for the nationalization of the means of production, central planning of all economic activity, and the curtailing of individual freedom for the sake of the collective good.

War and the Planned Society

Imperialist designs by the Great Powers in conjunction with the new ideological forces of rising nationalism and socialism all came together in the caldron of conflict that enveloped so much of the world after 1914.

Immediately with the outbreak of hostilities, the liberal system of individual liberty, private property, free enterprise, free trade, limited government, low taxes, and sound money was thrown to the wind.

The epoch of political and economic collectivism had begun. Civil liberties were rapidly curtailed in all the belligerent nations, with laws restricting freedom of speech and the press. Opponents of war were silenced with long prison sentences for “anti-patriotic” behavior. Industry and agriculture were soon placed under increasingly strict price and wage controls.

Governments imposed wartime planning boards that directed the economic activities of all. They raised taxes to heights never experienced even under the most plundering hands of absolute monarchs of the past. Governments also ended international free trade, and introduced rigid regulations over all imports and exports.

The nineteenth century freedom of movement under which people in the West could travel from one nation to another without passport or visa was abolished; a new era of immigration and emigration barriers began. The individual was now completely under the control and command of the state.

With this came a new governmental responsibility: direct caring for the economic welfare of the citizenry. German free-market economist Gustav Stolper explained:

“Just as the [First World] War for the first time in history established the principle of universal military service, so for the first time in history it brought economic national life in all its branches and activities to the support and service of state politics – made it effectively subordinate to the state. . . . Not supply and demand, but the dictatorial fiat of the state determined economic relationships – production, consumption, wages, and cost of living   . . .

“At the same time, and for the first time, the state made itself responsible for the physical welfare of its citizens; it guaranteed food and clothing, not only to the army in the field but to the civilian population as well . . .

“Here is a fact pregnant with meaning: the state became for a time the absolute ruler of our economic life, and while subordinating the entire economic organization to its military purposes, also made itself responsible for the welfare of the humblest of its citizens, guaranteeing him a minimum of food, clothing, heating, and housing.”

Gold as Money in the Prewar Liberal World

Along with these losses of personal civil and economic freedom came yet another abridgement of the liberal system of government: the abolition of the gold standard. During the 25 years of war between France and Great Britain following the French Revolution of 1789, both governments had resorted to the money printing press to finance their war expenditures. As a result, inflation had eaten away at the wealth and security of the British and French citizenry.

When those wars ended in 1815, the lesson learned was that governments could not be trusted with direct control over the creation of money. The liberal monetary goal was the reestablishment of the gold standard, so the amount of money in society was independent of political manipulation.

Better to rely upon the market forces of supply and demand and the profitability of gold mining, the classical liberals argued, than the caprice of politicians and special interest groups desiring to print the paper money they wanted to use to plunder the peaceful production of the mass of humanity.

Through the decades of the nineteenth century, first Great Britain and then the rest of the Western nations legally established the gold standard as the basis of their monetary systems. The gold standard was mostly managed by national central banks, and thus these were not truly free-market monetary systems.

But central banks were expected to, and for the most part did, abide by the monetary “rules of the game” of limiting increases (or decreases) in the domestic currency to additions to (or reductions in) the nation’s supply of gold. Sound money for the nineteenth century liberals was gold money.

Paper Money and Inflation Finances the War

But with the firing of the first shots in the summer of 1914, the belligerent governments all ended legal redemption of their currencies for fixed amounts of gold. The citizens in these warring countries were pressured or compelled to hand over to their respective governments the gold in their private hands, in exchange for paper money.

Almost immediately, the monetary printing presses were set to work creating the vast financial means needed to fight an increasingly expensive war.

In 1913, the British money supply amounted to 28.7 billion pounds sterling. But soon, as the British economist Edwin Cannan expressed it, the country was suffering from a “diarrhea of pounds.” When the war ended in 1918, Great Britain’s money supply had almost doubled to 54.8 billion pounds, and it continued to increase for three more years of peacetime until it reached 127.3 billion pounds in 1921, a fivefold increase from its level eight years earlier.

The French money supply had been 5.7 billion francs in 1913. By war’s end in 1918, it had increased to 27.5 billion francs. In this case, a fivefold increase in a mere five years. By 1920, the French money supply stood at 38.2 billion francs. The Italian money supply had been 1.6 billion lire in 1913 and increased to 7.7 billion lire, for a more than fourfold increase, and stood at 14.2 billion lire in 1921.

In addition, these countries took on huge amounts of debt to finance their war efforts. Great Britain had a national debt of 717 million pounds in 1913. At the end of the war that debt had increased to 5.9 billion pounds, and rose to 7.8 billion pounds by 1920.

French national debt increased from 32.9 billion francs before the war to 124 billion francs in 1918 and 240 billion francs in 1920. Italy was no better, with a national debt of 15.1 billion lire in 1913 that rose to 60.2 billion lire in 1918 and climbed to 92.8 billion in 1921.

Though the United States had only participated in the last year and a half of the war, it too created a large increase in its money supply to fund government expenditures, which rose from $1.3 billion in 1916 to $15.6 billion in 1918. The U.S. money supply grew 70 percent during this period, from $20.7 billion in 1916 to $35.1 billion in 1918.
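The growth multiples quoted in the last few paragraphs follow directly from the figures given; below is a small sketch that simply recomputes them from the article’s own numbers.

```python
# Money-supply figures as quoted above, with the growth multiples they imply.
figures = [
    ("Britain, 1913 -> 1918 (bn pounds)",        28.7,  54.8),
    ("Britain, 1913 -> 1921 (bn pounds)",        28.7, 127.3),
    ("France, 1913 -> 1918 (bn francs)",          5.7,  27.5),
    ("Italy, 1913 -> 1918 (bn lire)",              1.6,   7.7),
    ("United States, 1916 -> 1918 (bn dollars)",  20.7,  35.1),
]

for label, start, end in figures:
    print(f"{label}: grew {end / start:.1f}x")
```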

Twenty-two percent of America’s war costs were covered by taxation, about 25 percent from printing money, and the remainder of 53 percent by borrowing.


The German Ideology of Power for War

The most severe inflations during World War I occurred in Central and Eastern Europe. Among the worst was the one in Germany, during and then after the war, culminating in the near-total collapse of the German currency in 1922 and 1923.

For decades before the start of the war, German nationalist and imperialist ambitions were directed to military and territorial expansion. A large number of German social scientists known as members of the Historical School had been preaching the heroism of war and the superiority of the German people who deserved to rule over other nationalities in Europe.

Hans Kohn, one of the twentieth century’s leading scholars on the history and meaning of nationalism, explained the thinking of leading figures of the Historical School, who were also known as “the socialists of the chair” in reference to their prominent positions at leading German universities. He wrote:

“The ‘socialists of the chair’ desired a benevolent paternal socialism to strengthen Germany’s national unity. Their leaders, Adolf Wagner and Gustav von Schmoller, [who were Heinrich von] Treitschke’s colleagues at the University of Berlin and equally influential in molding public opinion, shared Treitschke’s faith in the German power state and its foundations. They regarded the struggle against English and French political and economic liberalism as the German mission, and wished to substitute the superior and more ethical German way for the individualistic economics of the West . . . In view of the apparent decay of the Western world through liberalism and individualism, only the German mind with its deeper insight and its higher morality could regenerate the world.”

These German advocates of war and conquest also believed that Germany’s monetary system had to be subservient to the wider national interests of the state and its imperial ambitions. Austrian economist Ludwig von Mises met frequently with members of the Historical School at German academic gatherings in the years before World War I. He recalled:

“The monetary system, they said, is not an end in itself. Its purpose is to serve the state and the people. Financial preparations for war must continue to be the ultimate and highest goal of monetary policy, as of all policy. How could the state conduct war, after all, if every self-interested citizen retained the right to demand redemption of banknotes in gold? It would be blindness not to recognize that only full preparedness for war [could further the higher ends of the state].”

Germany’s Great Inflation began with the government’s turning to the printing press to finance its war expenditures. Almost immediately after the start of World War I, on July 29, 1914, the German government suspended all gold redemption for the mark. Less than a week later, on August 4, the German Parliament passed a series of laws establishing the government’s ability to issue a variety of war bonds that the Reichsbank – the German central bank – would be obliged to finance by printing new money.

The government created a new set of Loan Banks to fund private sector borrowing, as well as state and municipal government borrowing, with the money for the loans simply being created by the Reichsbank.

During the four years of war, from 1914 to 1918, the total quantity of paper money created for government and private spending went from 2.37 billion to 33.11 billion marks. By an index of wholesale prices (with 1913 equal to 100), prices had increased more than 245 percent (they did not rise even further only because of wartime price and wage controls). In 1914, 4.21 marks traded for $1 on the foreign exchange market. By the end of 1918, the mark had fallen to 8.28 to the dollar.

Germany’s Hyperinflation and the Destruction of the Mark

But the worst was to come in the five years following the end of the war. Between 1919 and the end of 1922, the supply of paper money in Germany increased from 50.15 billion to 1,310.69 billion marks. Then in 1923 alone, the money supply increased to a total of 518,538,326,350 billion marks.

By the end of 1922, the wholesale price index had increased to 10,100 (still using 1913 as a base of 100). When the inflation ended in November 1923, this index had risen to 750,000,000,000,000. The mark’s foreign exchange value fell to 191.93 to the dollar at the end of 1919, to 7,589.27 to the dollar in 1922, and finally, on November 15, 1923, to 4,200,000,000,000 marks to the dollar.
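To get a feel for the pace of that collapse, one can back out the average monthly depreciation implied by the exchange-rate endpoints quoted above; a minimal sketch, treating the end of 1922 to mid-November 1923 as roughly ten and a half months:

```python
# Average factor by which the marks-per-dollar rate multiplied each month,
# computed from the exchange-rate endpoints quoted in the text.
def avg_monthly_factor(start_rate: float, end_rate: float, months: float) -> float:
    """Geometric-average monthly multiplier of the exchange rate over the period."""
    return (end_rate / start_rate) ** (1.0 / months)

# From 7,589 marks per dollar (end of 1922) to 4.2 trillion marks per dollar
# (November 15, 1923): the rate multiplied by roughly 6.8x every month on average.
print(f"{avg_monthly_factor(7_589.27, 4_200_000_000_000, 10.5):.1f}")
```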

During the last months of the Great Inflation, according to Gustav Stolper, “more than 30 paper mills worked at top speed and capacity to deliver notepaper to the Reichsbank, and 150 printing firms had 2,000 presses running day and night to print the Reichsbank notes.” In the last year of the hyperinflation, the government was printing money so fast and in such frequently larger and larger denominations that to save time, money, and ink, the bank notes were being produced with printing on only one side.

Finally, facing a total economic collapse and mounting social disorder, the German government in Berlin appointed the prominent German banker Hjalmar Schacht as head of the Reichsbank. He publicly declared in November 1923 that the inflation would be brought to an end and a new non-inflationary currency backed by gold would be issued. The printing presses were brought to a halt, and the hyperinflation was stopped just as the country stood at the monetary and social precipice of total disaster.

The Legacies of Tyranny, Paternalism and Lost Freedom

But the deaths, destruction, and disruptions of the First World War and its immediate aftermath were never fully recovered from. In 1922, Mussolini and his Fascist Party came to power in Italy. In 1933, Hitler’s Nazi movement took power in Germany in the midst of the Great Depression.

In the United States, also in 1933, Franklin D. Roosevelt’s New Deal ushered in America’s own version of a fascist-type planned economy at first, followed in the postwar period – after a worse, far more destructive and mass-murdering Second World War – by a growing concentration of political control and economic paternalism in the form of the modern interventionist-welfare state. (See my article, “When the Supreme Court Stopped Economic Fascism in America.”)

Out of this second “war to end all wars,” came America’s role as global policeman and international social engineer during the Cold War with the Soviet Union. But even the post-Cold War era after the end of the Soviet Union in 1991 has seen part of the legacy of World War I in international affairs.

The wars and “ethnic cleansings” experienced in the former Yugoslavia in the 1990s, and at least part of the causes behind the current conflicts in the Middle East are outgrowths of the post-World War I peace settlements imposed by the victorious Allied powers.

But most importantly, I would suggest, the lasting legacy of the First World War has been the rationales for, and implementations of, paternalist Big Government in the Western world, with its diminished recognition of and respect for individual liberty, free association, and freedom of competitive trade and exchange, and its reduced civil liberties and weakened impartial rule of law.

From this has followed the regulating and redistributing State, which includes political control and manipulation of the monetary and banking systems to serve those in governmental power and others who feed at the trough of governmental largess.

It is a legacy that will likely take another century to completely overcome and reverse, if we are able to devise a strategy for restoring the idea and ideal of a society of liberty.

Why Government Deficits and Debt Do Matter, Richard Ebeling

The Congressional Budget Office (CBO) reported in early May that for the month of April 2015 the Federal government ran a budget surplus, taking in more in taxes than it laid out in expenditures. Don’t be fooled by one month, especially a month in which people filed and paid their taxes. Government deficits and growing debt are on the horizon for as far ahead as anyone can predict.

Yes, for right now the trillion-dollar-a-year budget deficits that marked the first years of the Obama Administration have abated. For 2015 through 2017, the CBO projects that Washington’s budget deficits will be “only” in the range of $468 billion to $489 billion per year.

But given current “entitlement” legislation for such programs as Social Security, Medicare and ObamaCare, the annual budget deficits will start rising again after 2017, and will be over a trillion dollars once more by 2025.

The CBO calculates that by 2025 these more “modest” annual budget deficits will cumulatively add over $7.5 trillion to the existing $18.3 trillion of Federal government debt, for a total of almost $26 trillion a decade from now. This would be more than a 40 percent increase in the Federal government’s debt over the coming ten-year period.

The per capita government debt burden for every American in 2015 is estimated to be about $58,000. In ten years, in 2025, based on demographic estimates of U.S. population growth, that per capita debt burden will have increased to nearly $78,000, for an almost 35 percent increase, while the U.S. population will have only increased by around 8 percent over the decade.
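The per-capita figures can be checked with simple arithmetic. In the sketch below, the population numbers are rough illustrative assumptions, not the demographic estimates the text refers to.

```python
# Back-of-the-envelope check of the per-capita debt-burden figures.
debt_2015 = 18.3e12             # existing Federal debt, in dollars
debt_2025 = debt_2015 + 7.5e12  # after the projected cumulative deficits
pop_2015 = 318e6                # assumed 2015 U.S. population (illustrative)
pop_2025 = pop_2015 * 1.08      # the ~8 percent population growth mentioned above

per_capita_2015 = debt_2015 / pop_2015
per_capita_2025 = debt_2025 / pop_2025
increase = per_capita_2025 / per_capita_2015 - 1

print(f"2015: about ${per_capita_2015:,.0f} per person")
print(f"2025: about ${per_capita_2025:,.0f} per person ({increase:.0%} more)")
# Yields figures in the same ballpark as the ~$58,000 and ~$78,000 quoted above.
```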

The Government’s Burden Equals What is Taxed and What is Borrowed

Does it matter that the government funds part of its expenses through deficit financing instead of simply raising taxes to cover all of its expenditures? Nobel Prize-winning free market economist Milton Friedman (1912-2006) was adamant that what mattered was what government spent, not how it raised the money to pay for it:

“Keep your eye on one thing and one thing only: how much government is spending, because that’s the true tax . . . If you’re not paying for it in the form of explicit taxes, you’re paying for it indirectly in the form of inflation or in the form of borrowing. The thing you should keep your eye on is what government spends, and the real problem is to hold down government spending as a fraction of your income, and if you do that, you can stop worrying about the debt.”

If the government taxes the citizenry, the dollars collected and the real resources those dollars have buying power over in the marketplace are transferred from private sector hands to the hands of Uncle Sam, who then decides for what they will be used.

But this is no less the case when the government borrows dollars in financial markets to cover part of its expenses in excess of collected taxes. Instead of a private borrower borrowing those dollars and using the real resources those dollars can buy in the marketplace for investment, capital formation or other purposes, the government borrows them and uses the real resources that can be bought with them for its own political-oriented goals and ends.

Either way, the total amount of the income and resources of the society transferred out of private hands and into the hands of the government is represented by the total spending by that government, even if part has been taxed and part has been borrowed.

Friedman once asked the question: Which is preferable, a situation in which the government taxes and spends $800 billion with a balanced budget, or a situation in which it taxes $400 billion and borrows $100 billion, for total spending of $500 billion, with a budget deficit?

In terms of the total extraction of wealth and income from the members of society by government, clearly its siphoning off $500 billion is preferable to it taking and using $800 billion of the resources and products produced through the peaceful and productive efforts of the citizen-taxpayers, Friedman reasoned.

America’s Former Balanced Budget Fiscal Rule, and Its Benefits

However, while it may be true that, whether the government taxes or borrows, the taxpayer-citizens are poorer by that total amount, it nonetheless makes a huge political difference whether the government follows a balanced budget rule or the expedient of budget deficits, because the two differ greatly in how institutionally easy or difficult they make it for government to grow over time.

Many years ago, Nobel Prize-winning economist James M. Buchanan (1919-2013) and his colleague Richard Wagner wrote a book, Democracy in Deficit (1977). They pointed out that during the first 150 years of the United States, the Federal government followed what they referred to as an “unwritten fiscal constitution.”

There is nothing in the U.S. Constitution that requires the government to balance its budget annually. But such a balanced budget “rule” for managing the government’s spending and taxing was considered a way to assure transparency and greater responsibility in the financial affairs of government.

It was argued that a balanced budget made it easier and clearer for the citizen and the taxpayer to compare the “costs” and “benefits” from government spending activities.

Since each dollar spent by the government required a dollar collected in taxes to pay for whatever the government was doing, the citizen and taxpayer could make a more reasonable judgment whether they considered any government spending proposal to be “worth it” in terms of what had to be given up to gain the supposed “benefit” from it.

The trade-off was explicit and clear: any additional dollar of government spending on some program or activity required an additional dollar of taxes, and therefore the “cost” of one dollar less in the taxpayer’s pocket to spend on some desired private-sector use instead.

Or if taxes were not to be increased to pay for a new or expanded government program, the supporter of this increased spending had to explain what other existing government program or activity would have to be reduced or eliminated to transfer the funds to pay for the new proposed spending.

There was an exception to this balanced budget rule: a “national emergency,” such as a war, when the government might need large amounts of extra funds more quickly than they could be raised through higher taxes.

But it was also argued that once the national emergency had passed, the government was expected to manage its finances to run budget surpluses, taking in more than it spent each year. The surplus was to be used to pay off the accumulated debt as quickly as possible to relieve current and future taxpayers from an unnecessary and undesirable burden.

Amazingly, in retrospect, this actually was the fiscal rule and pattern followed by the United States government throughout the nineteenth century and into the twentieth century, until the Great Depression in the 1930s.


The Keynesian Call for Budget Deficits to “Stimulate” the Economy

However, starting in the 1930s, this unwritten fiscal constitution was permanently overturned as part of the Keynesian Revolution. It was argued that the government should not balance its budget on a yearly basis. Instead, the government should balance its budget “over the business cycle”: running budget deficits in “bad” years (recession or depression) and budget surpluses in “good” years (periods of “full employment” and rising Gross Domestic Product).

This new “rule” of a balanced budget over the business cycle became a generally accepted idea for fiscal policy among many economists and government policy makers.

However, there has been one major problem with this alternative conception of the role and method of managing government spending and taxing: During the 70 years since the end of the Second World War in 1945, the U.S. government has run budget deficits in 58 of those years and had budget surpluses in only 12 years.

Hence, as Buchanan and Wagner referred to it, “democracy in deficit.”

With the elimination of the balanced budget “rule” as the guide for fiscal policy, it has been possible for politicians to create the economic illusion that it is possible to give voters “something for nothing” – a “free lunch.”

The Fiscal Illusion of Giving Voters Partly “Something for Nothing”

Politicians have been able to offer more and more government spending to special interest groups to obtain campaign contributions and votes in the attempt to be elected and re-elected to political office.

They can offer benefits in the present in the form of new or additional government spending, but they no longer have to explain where all the money will come from to pay for it. The “costs” of that deficit spending are to be paid by some unknown future taxpayers, in some amount whose discussion can be put off until “some time” in the future.

Thus, politicians can supply benefits in the present – “now” – to targeted groups whose votes are wanted on election day, and avoid answering how the money will be paid back (with interest) because that can be delayed until the future – a period later in time, years ahead, when someone else will hold political office and will have to deal with the problem.

It is not as if the danger from unrestrained government borrowing was never warned about before John Maynard Keynes (1883-1946) made deficit spending a “virtue” in the name of “stimulating” the economy in his famous book, The General Theory of Employment, Interest, and Money (1936).

 


 Warnings about Deficits and Government Debt from Long Ago

The famous Scottish philosopher, historian and economist David Hume (1711-1776) expressed the danger in his essay “Of Public Credit” (1741), over two hundred and fifty years ago:

“It is very tempting to a minister [in the government] to employ such an expedient, as enables him to make a great figure during his administration, without overburdening the people with taxes, or exciting any immediate clamors against himself. The practice, therefore, of contracting debt will almost infallibly be abused, in every government. It would scarcely be more imprudent to give a prodigal son a credit in every banker’s shop in London, than to empower a statesman to draw bills [borrow money], in this manner, upon posterity.”

And almost 150 years ago, the English economist Dudley Baxter (1827-1875) very clearly contrasted the incentives at work on those running for and holding political office when the institutional rule is a balanced budget versus deficit spending and accumulated debt, in his book National Debts (1871):

“When money is raised by taxation within the year for which it is needed, the amount that can be raised is limited by the tax-enduring habits of the people, and must be as small as possible in order not to provoke discontent [among the voters]. By the same reason it must be spent economically, and made to go as far as possible.

 “But when the money is raised by loans, it is limited only by the necessity of the interest [payment] not to be too large for the taxable endurance of the people, or provoking their discontent. Hence the limits of borrowing are about twenty times larger than the limits to taxation, and an amount that is monstrous as a tax, is (apparently) a very light burden as a loan. In consequence, borrowing is freed from the most powerful check that restrains taxation . . .

 “When a loan is obtained the reason for economical expenditure is equally wanting, and borrowed money is commonly expended with much greater profuseness, and even wastefulness, than would be the case with taxes.”

Keynesian Economics served as an additional and powerful rationale for politicians to do what they like to do: spend other people’s money. In the process, it pushed aside the warnings of those like David Hume and Dudley Baxter, and many other economists, who understood clearly the dangers of unrestricted government authority to both tax and borrow.

 


 The Moral Dimension of Government Debt Financing

There is an additional moral dimension to the issue of government deficit spending and its resulting accumulation of debt. This was a theme especially addressed by the economist James M. Buchanan.

Normally, when a private individual or enterprise undertakes debt financing of some portion of his current expenditures, the legal obligation to pay back the contracted principal and interest falls upon the borrower. If he defaults or passes away before repayment of all that had been borrowed, creditors have a lien on the borrower’s positively valued assets.

The “benefits” of having the use of a greater sum of money in the present than his own income would enable him to spend, imposes on the borrower a “cost” of an obligation to pay back the loan out of his future income and assets. The cost and the benefit are linked together within the same person.

It is not the same, Buchanan argued, with government deficit spending and repayment of accumulated debt:

“If I borrow $1,000 personally, I create a future obligation against myself or my estate in the present value of $1,000. Regardless of my usage of the funds, I cannot, by the act of borrowing, impose an external cost on others. Unless I leave positively valued assets against which my debts can be satisfied, my creditors cannot oblige my heirs to pay off their claims.

 “By contrast, suppose I ‘vote for’ an issue of public debt in the amount of $1,000 per person. I may recognize that this debt embodies a future tax liability on some persons, but I need not reckon on the full $1,000 liability being assigned to me. If I leave no positively valued assets, the government’s creditors can still enforce claims on my progeny as members of the future-period taxpaying group.

 “Further, the membership in the taxpaying group itself shifts over time. New entrants, and not only those who descend directly from those of us who make a borrowing-spending decision, are obligated to meet debt, interest and amortization charges.

“In sum, the institution of public debt introduces a unique problem that is usually absent with private debt; persons who are decision makers in one period are allowed to impose possible financial losses on persons in future generations. It follows that the institution [government] is liable to abuse this and overextend its borrowing practices. There are moral and ethical problems with government deficit financing that simply are not present with the private counterpart.”

Government debt is a way to impose part of the cost of what special interest group voters and politicians want “today” on those who “tomorrow” will have to be taxed to pay back the borrowed money.

Even if a current recipient of such governmental deficit spending largess is, himself, one of the future taxpayers, he is usually likely to have received a greater benefit than his personal portion of the future tax burden. Suppose that he is a farmer, for instance, who receives “today” $100,000 from the government for not growing a crop. When “tomorrow” comes and taxes have to be raised to pay back that $100,000 to the creditors who lent that sum to the government, that particular farmer’s additional tax burden will be a small fraction of that total amount.

To continue with the same example, many farmers who may have benefited from agricultural price-support programs decades ago have passed away. The burden of paying back whatever portion of that farm price-support spending was originally financed by deficit spending now falls upon others, some of whom may not even have been born at the time the recipient received this special privilege from the government.
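The concentrated-benefit, dispersed-cost arithmetic behind the farmer example can be made explicit. In the sketch below, the number of future taxpayers and the interest factor are illustrative assumptions, not figures from the text.

```python
# Concentrated benefit vs. dispersed cost: one recipient receives the full subsidy
# today, while repayment (with interest) is spread across all future taxpayers.
subsidy = 100_000         # dollars received "today" by the farmer
interest_factor = 1.25    # assumed total interest accrued by the time of repayment
taxpayers = 150_000_000   # assumed number of future taxpayers sharing the bill

cost_per_taxpayer = subsidy * interest_factor / taxpayers
print(f"Each future taxpayer's share: about ${cost_per_taxpayer:.4f}")
# Less than a tenth of a cent each -- a tiny fraction of the farmer's $100,000 gain,
# which is the asymmetry Buchanan is pointing to.
```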

What is the ethics, James Buchanan asked, of a fiscal system whose incentives enable the current generation of taxpayers and recipients of government programs to shift part of the burden of paying for them onto future generations? Is that a culturally and economically healthy legacy to leave to our children and grandchildren?

 


 The Importance of Balanced Budgets and Debt Limits

This is why it would be desirable to incorporate a balanced budget amendment into the U.S. Constitution. It would not guarantee that government did not tax and spend more. But it would impose a greater clarity and transparency on the fiscal dimension of government decision-making, making it far more difficult for those offering other people’s money in exchange for votes to do so without also having to explain who would be paying for the favors and privileges given to some, and how much they would have to pay.

In lieu of adding such an amendment to the Constitution, it is imperative that Congress not give up its authority to raise the Federal debt limit. While raising the debt limit has become more or less a rubber stamp after a number of congressmen make some verbal objections to more government borrowing, the fact is that if Congress were ever under sufficient pressure from voting constituents to say “NO,” that very act would impose a balanced budget on the Federal government, since once Uncle Sam had reached the hard debt limit, he would only be able to spend what he had taken in in taxes.

This, combined with a strong educational and political campaign to reawaken the principles and ideals of individual liberty and limited government, can bring the seemingly unlimited growth in the size and scope of government to a halt.

Once halted, a repeal and retrenchment drive can then begin to reverse the size and scope of Big Government so as to restore a society of freedom grounded in individual rights and economic liberty.

America’s Endless War Over Money, by K. Granville and B. Appelbaum (via NYT, 08/04/2015)

 

The “Audit the Fed” debate is the latest manifestation of a conflict as old as the nation, between those who argue that a strong central bank improves economic stability, and those who see an overbearing government engaged in harmful meddling.

Some Background: Strong vs. Weak Currency

Battles over central banking have historically pitted financial elites who wanted to limit the availability of money, thus preserving its value, against farmers, businessmen and other borrowers who wanted money to be plentiful – and cheap. Each side has sometimes regarded the central bank as its great ally in that fight, and sometimes as its bitter enemy.

Since the Great Recession the Fed has mostly sided with the borrowers, creating vast amounts of new money and holding short-term interest rates near zero. Inevitably, that has angered creditors and sparked efforts to swing the pendulum in the other direction.

http://i1.nyt.com/images/2015/03/24/business/fedtime4/fedtime4-master1050.jpg

A cartoon satirizing Andrew Jackson, shown raising a cane labeled “veto,” and his battle against the Bank of the United States and its supporters among state banks.

1. A Philadelphia Story: The Banks of the United States

The nation’s first two central banks, both called the Bank of the United States, were private, for-profit organizations chartered by Congress. The first (1791-1811) was created to help the government pay its Revolutionary War debt, stabilize the country’s currency and raise money for the new government. It was the dream of Alexander Hamilton, secretary of the Treasury, who overcame resistance from Thomas Jefferson (who wrote “I believe that banking institutions are more dangerous to our liberties than standing armies”) and other Southern lawmakers. When its 20-year charter expired, Congress chose not to renew it.

The Second Bank of the United States was chartered a few years later, in the aftermath of the War of 1812, after Congress decided it had made a mistake. But it lasted just 17 years. President Andrew Jackson said the bank concentrated too much economic power in a corrupt moneyed elite and vetoed a bill to extend its charter in 1832. Supporters of the bank rallied around Henry Clay, Jackson’s opponent for reelection that year, but the “Bank War” ended when Jackson won easily. United States Treasury funds were withdrawn and deposited in state banks; the nation would be without a central bank for more than 70 years.

The headquarters of both banks still stand about a block apart in downtown Philadelphia.

“The bank is trying to kill me, but I will kill it!”—Andrew Jackson.

 

http://i1.nyt.com/images/2015/03/20/business/fedtime3/fedtime3-master1050.jpg

Crowds gather across the street from a failed New York bank in 1908. Credit: George Grantham Bain Collection/Library of Congress

2. Perpetual Panic: Life Without a Central Bank

A severe financial crisis drove the economy into a deep recession in 1837, just one year after the demise of the Second Bank. Such crises became a recurring event in American life and, as the economy grew, so did their size and frequency. Banks created the New York Clearing House as a private-sector backstop, but it proved inadequate for the task. The government, too, was hamstrung. In the absence of a central bank, the United States regulated the value of its currency by guaranteeing that dollars could be exchanged for gold, and sometimes silver. This meant the government could not respond to financial crises, and the resulting economic downturns, by increasing the supply of money.

In 1907, yet another crisis was brought about by a failed attempt to corner the stock of the United Copper Company. Government officials and financial executives jerry-rigged a response: an emergency lending pool orchestrated by J. Pierpont Morgan. But the crisis proved to be a tipping point in the political debate about the need for a central bank. There was a growing political consensus that Wall Street needed a permanent fire department.

“Unless we have a central bank with adequate control of credit resources, this country is going to undergo the most severe and far reaching money panic in its history.”—Jacob Schiff, a prominent New York banker, in 1907

 


President Woodrow Wilson signing the Federal Reserve Act of 1913, in a painting by Wilbur G. Kurtz Sr. He is surrounded by members of his cabinet and Congressional leaders. Credit: Woodrow Wilson Presidential Library, Staunton, Va.

3. Third Time’s the Charm: The Federal Reserve Act of 1913

In November 1910, Senator Nelson Aldrich met with a group of bankers at a resort on Georgia’s Jekyll Island and hammered out a plan for a new central bank. The idea touched on many of the great political battles of the age: The states against Washington; Wall Street financiers against smaller banks, particularly in the South and West; populists against the Gilded Age elite. The bill that emerged from several years of debate, signed by President Woodrow Wilson, was an awkward compromise: There would be 12 privately owned reserve banks in major cities across the country, preserving the power of financial elites. But the banks would be overseen by a board of presidential appointees, including the Treasury secretary, granting the public a new measure of control over the financial system.

Before the Fed was fully established, however, the old system took a final bow. A financial crisis struck in 1914, and roughly twice as many banks failed as in 1907.

“We shall deal with our economic system as it is and as it may be modified, not as it might be if we had a clean sheet of paper to write upon; and step by step we shall make it what it should be.”—Woodrow Wilson, from his first inaugural address

 


In 1933, after some banks limited withdrawals to 5 percent or less, customers waited to enter the National City Bank in Cleveland. Credit: Associated Press

4. Recession and Response

Instead of preventing crises, the Federal Reserve helped to cause the Great Depression. The Fed was supposed to manage the gold standard — to make sure the economy was not choked by a lack of money and a resulting spike in interest rates. Instead, the Fed was paralyzed by disagreements between regional banks and the central board. It let the money supply shrink by one-third. The result was the worst economic crisis in the nation’s history.

Congress responded to the Fed’s failure by greatly increasing its power and responsibilities. In 1934 it authorized the president to devalue the dollar, beginning the long process of replacing the gold standard with a currency whose value is managed by the Fed. In 1935 it gave the Fed responsibility for “the general credit situation of the country.” The act also removed the Treasury secretary from the Fed’s board and created a new policy-making committee where board members would outnumber reserve bank presidents.

“I would like to say to Milton and Anna: Regarding the Great Depression. You’re right, we did it. We’re very sorry. But thanks to you, we won’t do it again.”—Ben Bernanke, then a Fed governor, in a 2002 speech addressing Milton Friedman and Anna Schwartz.

 


The Federal Open Market Committee in 1966, led by the Fed chairman, William McChesney Martin, seated center. Credit: Fabian Bachrach

5. The Long Road to Independence

The central bank now had the freedom to encourage growth by printing money, and the responsibility not to print too much. Politicians who were focused on short-term problems were quick to demand money and, for the next several decades, the Fed hesitated to say no.

In 1942, at the request of the Treasury Department, the Fed agreed to hold down interest rates on government bonds to help finance military spending for World War II. It kept rates low for almost a decade, through the beginning of the Korean War, until rising inflation finally induced the Treasury to sign a 1951 accord affirming the Fed’s autonomy to raise rates.

In the 1960s, Wright Patman, a populist Democrat congressman from Texas and chairman of the House banking committee, repeatedly introduced legislation to roll back the Federal Reserve Act of 1913, maintaining that, in the Fed, “a body of men exist who control one of the most powerful levers moving the economy and who are responsible to no one.”

And in 1965, President Lyndon B. Johnson, who wanted cheap credit to finance the Vietnam War and his Great Society, summoned Fed chairman William McChesney Martin to his Texas ranch. There, after asking other officials to leave the room, Johnson reportedly shoved Martin against the wall as he demanded that the Fed once again hold down interest rates. Martin caved, the Fed printed money, and inflation kept climbing until the early 1980s.

“I hope you have examined your conscience and you’re convinced you’re on the right track.”—Lady Bird Johnson, spoken to William McChesney Martin, on his arrival at the LBJ ranch.

 


Paul A. Volcker, shown in 2009. He was appointed Fed chairman in 1979 with the task of controlling galloping inflation. Credit: Brian Snyder/Reuters

 

6. The Volcker Rule: An Independent Central Bank

Congress finally formalized its demands in 1978. A recession in the mid-1970s had pushed the unemployment rate as high as 9 percent, and Democrats, frustrated by what they saw as the Fed’s inadequate response, won passage of legislation establishing the so-called dual mandate. The Fed was instructed to pursue maximum employment and price stability.

It turned out to be a high-water mark for Congressional interference. Inflation rose by 11 percent the following year, and President Jimmy Carter agreed to appoint a new Fed chairman, the independent-minded Paul A. Volcker. Over the next several years, Mr. Volcker would raise interest rates sharply, driving the economy into a deep recession but ultimately bringing inflation under control. President Ronald Reagan, meanwhile, made a point of respecting the Fed’s independence. Volcker was still subjected to sharp Congressional pressure, but it was mostly political theater. The Fed had declared its independence.

“Every time he had a press conference somebody was urging him to take a slap at the Federal Reserve, but he never did.”—Paul Volcker, referring to President Reagan.

 


Ben Bernanke, the Fed chairman, takes questions from reporters at an April 2011 news conference. Credit: Jim Watson/Agence France-Presse — Getty Images

 

7. Smokescreens and Sunshine: The Fed Opens Up

Between the great inflation of the early 1980s and the Great Recession that began in 2008, the Fed and the economy enjoyed more than two decades of relative peace and quiet, a period that Fed officials sometimes call the Great Moderation. Inflation trended downward and, except for a few short recessions, unemployment stayed down too. And Fed officials came to see these trends as a validation of their newfound independence.

The Fed also began to change its secretive culture. The trend began reluctantly, under pressure from critics who argued that independence required transparency. In 1983, for example, the Fed promised Congress that it would begin to release its Beige Book, a summary of economic reports from its regional reserve banks, as a way of distracting attention from more important reports that it was determined to keep secret. But the Fed gradually concluded that transparency could increase the power of monetary policy. In 1994, it began to announce changes in policy at the end of each policy-making session. In 2004, it began to publish edited accounts of its discussions three weeks after each session. And in 2011, its chairman, Ben S. Bernanke, began to hold quarterly news conferences.

“Since I’ve become a central banker, I’ve learned to mumble with great coherence. If I seem unduly clear to you, you must have misunderstood what I said.”—Alan Greenspan, Fed chairman, in 1987, before the central bank’s communications revolution.

“The Federal Reserve is the most transparent central bank to my knowledge in the world. We have made clear how we interpret our mandate and our objectives and provide extensive commentary and guidance on how we go about making monetary policy decisions.”—Janet L. Yellen, Fed chairwoman, in 2014, after the communications revolution.

 


Protesters in April 2009, outside an event where Ben Bernanke, the Fed chairman, was speaking. Credit: Jason Miczek/Reuters

 

8. The Great Recession, and ‘Audit the Fed’

The Fed’s long run as a political darling came to a crashing end in 2008. Its lax oversight of the financial system was one reason for the severity of the crisis, and the smartest guys in Washington had failed to see it coming. The Fed’s response was also controversial: It provided expansive support for the financial system, preserving some of America’s least popular companies, not to mention foreign banks. And then it embarked on an expansive stimulus campaign to revive the economy.

In the aftermath of the crisis, Congress moved quickly to strengthen the Fed’s regulatory responsibilities. It also imposed some limits on the Fed’s ability to repeat its rescue of the financial system. But it is the stimulus campaign that has prompted the most controversy.

In an inversion of the historical pattern, congressional Republicans have criticized the Fed for printing too much money, arguing that higher inflation will be the inevitable consequence. And they have put forward proposals to constrain the central bank. One bill, known as “Audit the Fed,” would authorize the Government Accountability Office to review the Fed’s monetary policy decisions. Another approach, backed by the House Financial Services Committee, would require the Fed to publicly articulate a set of rules it intends to follow in making monetary policy, and then explain any deviations.

“The Federal Reserve System must be challenged. Ultimately, it must be eliminated. The government cannot and should not be trusted with a monopoly on money. No single institution in society should have power this immense.”—From End the Fed (2009) by Ron Paul.

The Greek Monetary Back-Story, Jim Grant (17/02/2015)

Raging against its German creditors, the new Greek government is demanding reparations for Nazi-era depredations. Herewith—from the Grant’s archives—some timely context both for the Greek negotiating position and the underlying monetary issues.

(Grant’s, February 24, 2012) “Statements and assurances from Greece are no longer taken at face value,” a German economics professor, Wolfram Schrettl, has remarked. “Who will ensure afterward that Greece continues to stand by what Greece is agreeing to now?” the German finance minister, Wolfgang Schäuble, has demanded.

Such expressions of German disdain ignite a special kind of fury in Greece. While 21st-century Greek fiscal and financial management may leave a little something to be desired, the record of German monetary stewardship in the Hellenic Republic is far worse. During Nazi occupation in World War II, Greece suffered famine, pestilence, wholesale killings and hyperinflation. The last-named plague is the topic at hand.

Let bygones be bygones, they say, and well they might say it in Europe, the land of ancient enmities. However, there can be no understanding the present-day Greek sensitivity to its high and mighty creditors without a rudimentary knowledge of the German-inflicted catastrophes of 1941-44. Nor can there be a full and proper appreciation of the risks inherent in paper money without a basic grounding in such abominations as the occupation-era Greek drachma or, for that matter, the post-occupation drachma—for the liberated Greek central bank took up where the German-corrupted central bank left off. Fiat currency can’t seem to help itself. The insubstantial monetary material sooner or later goes up in smoke, no matter whose hand cranks the presses. These days, of course, the cranking hand is a technocratic one. “Quantitative easing” is the anodyne phrase. Yet in peace as in war, gold is the preferred refuge from state-imposed paper currency.

According to Mark Mazower’s scholarly history, “Inside Hitler’s Greece: The Experience of Occupation, 1941-44,” between 250,000 and 300,000 Greeks died from famine at the hands of the German overlords. “In reality,” Mazower writes, “there was no deliberate German plan of extermination.” The extermination that did occur was rather the result of the calculated destruction of the Greek economy and the stripping of the Greek larder for the Axis armies, the German one in particular. “Who is Mr. Schäuble to revile Greece?” the 82-year-old president of Greece, Karolos Papoulias, demanded last week in response to the German finance minister’s slighting comments about the country for which a teenaged Papoulias fought in World War II.

Famine was a certain, if not deliberately sought, consequence of German occupation policy, but there was nothing accidental about the destruction of the drachma. The German-controlled Bank of Greece printed up the national currency as the need arose. In the opening months of 1941, before the Germans (and the Italians and Bulgarians) came to stay, a British sovereign was worth 1,200 drachmas. As the Germans cleared out, in November 1944, blowing up railroad tunnels, rolling stock, harbors and such as they left, a sovereign commanded 71 trillion drachmas.

A sovereign is a gold coin weighing not quite one-quarter ounce—to be exact, 0.23542 troy ounce. When Britain was on the gold standard, a sovereign was worth one pound sterling, and it circulated as the people’s money. It was a popular coin in Greece, too, as Britain and Greece had joined monetary forces in 1928. Three years later, Britain went off the gold standard, and in 1932, Greece and Britain ended their so-called stabilization relationship. Cut loose from gold, the paper pound began its long descent in purchasing power measured in gold. However, from the Greek vantage point, paper sterling was a better anchor for the drachma than no anchor at all, and in 1936 the Greeks re-lashed their currency to Britain’s, at the rate of 548 drachmas to the pound.

Fast-forward now to the outbreak of war in Europe in 1939. As the pound came under new inflationary pressure, so did the drachma. In Athens, the cost of living was accelerating well before Hitler mounted his attack on Greece in April 1941. In 14 months of neutrality, prices in the Greek capital had jumped by 15%.

Nowadays, Germany is the national face of monetary and fiscal rectitude. It wore a different face in wartime Greece, though the German army of occupation did observe some of the basic commercial forms. “Rather than requisition all required goods and facilities,” write Dimitrios Delivanis and William C. Cleveland in their “Greek Monetary Developments, 1939-1948,” “the occupation armies usually preferred to pay with newly created currency.”

The German visitation lasted for 3-1/2 years, but the real monetary damage was done in the first 18 months. In April 1941, an index of the cost of living in Athens registered 116. In October 1942, the same index stood at 15,192, a gain—if that’s the word—of almost 13,000%, or an average monthly rate of rise of 722%, according to Delivanis and Cleveland. It didn’t help the price picture that the Greek economy was crippled or that the Germans were making off with whatever wasn’t nailed down to aid the Axis war effort. What, especially, didn’t help the price picture was the breakneck growth in the local money supply, up roughly 10-fold between May 31, 1941, and Oct. 31, 1942, or the fact that, in 1942-43, newly printed drachmas financed 81% of public expenditures.
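For readers who want to check the arithmetic behind those figures, here is a rough back-of-the-envelope reconstruction from the two index readings quoted above (my own sketch, not a calculation taken from Delivanis and Cleveland):

Index ratio: 15,192 / 116 ≈ 131, an increase of roughly 13,000% over the 18 months from April 1941 to October 1942.
Simple monthly average: 13,000% / 18 ≈ 722% per month, the figure cited above.
Compound equivalent: 131^(1/18) ≈ 1.31, that is, about 31% per month had the rise been steady.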

During this first act in the play of the death of the drachma, the currency’s domestic purchasing power fell by 99.34%, its external purchasing power—expressed in terms of the gold sovereign—by 99.73%. These facts we commend to the 21st-century gold bulls on those discouraging days when the eternal monetary metal seems to trade as a proxy for the euro. It isn’t the euro, after all, but almost the opposite. It is money, the genuine article.

“It must be concluded,” write Delivanis and Cleveland, “that the almost complete collapse of the value of the drachma, both internally and externally, was largely the result of enemy exploitation. The enemy occupation authority seized all stocks of commodities that were discovered, exploited for its own benefit the productive facilities and capital equipment of the country, confiscated and exported as much as possible of the current production, and extorted, as occupation costs, payments equivalent to 7,674 million prewar drachmas between May 1, 1941, and March 31, 1942, 2,287 million prewar drachmas between April 1, 1942, and Oct. 31, 1942.”

No bear market is complete without a trick rally, an Act II, and the terminal decline in the Greek currency was no exception. News of the Allied victory at El-Alamein in October-November 1942 caused a rush out of gold into scrip. A sovereign had fetched 37,144 drachmas before the battle that Churchill famously characterized as not the end of the war, nor even the beginning of the end of the war, but, “perhaps, the end of the beginning.” By February 1943, it took just 14,180 drachmas to buy a sovereign—as it turned out, not a bad entry point for the final move up to an average of 71 trillion.

Act III of the eradication of the drachma resembled Act I but with the addition of many more commas and zeros to all the significant currency and inflation data. Hopes of early deliverance from the Nazi occupation dashed, Greeks resigned themselves to the likelihood of a replay of the Weimar inflation of 1922-23, an earlier episode of German-directed monetary chaos. As noted, the Athens cost-of-living index stood at 116 on the eve of the German occupation. It registered 76,171 in November 1943 and 18,850,000,000,000 in the first 10 days of November 1944.

“During the final period of the enemy occupation of Greece,” the Delivanis and Cleveland account continues, “the index of note issue by the Bank of Greece rose to fantastic heights. During the period of 18 months and 10 days [i.e., May 1943 til Nov. 10, 1944], the index increased in magnitude 11,214,823 times, i.e., from 7,368 to the value of 82,630,830,289. The total increase in the period was 1,121,482,300%, representing an average monthly increase of more than 62 million percent as compared with the average monthly increase of 60% during the first period of enemy occupation from May 1941 to October 1942, and the average monthly increase of only 22.5% during the succeeding period from November 1942 to April 1943. The tremendous expansion of the note issue was caused by the growth of public expenditures, principally on account of the enemy occupation and by the cumulative, self-reinforcing effects of monetary inflation.” Toward the end of the German stewardship, the Bank of Greece printed 99% of the receipts of the Greek treasury.

Gold and foreign bank notes were the de facto coin of the realm. The British Middle Eastern forces funneled an estimated 700,000 sovereigns to Greek guerillas. And the Germans, in a vain attempt to tamp down the raging inflation rate, sold gold in exchange for drachmas—as many as 1,300,000 sovereigns in 1943 and 1944. It was the bright idea of the head of the German economic mission to Greece, Hermann Neubacher, to drop sovereigns on the Greek market to try to buck up the drachma. “Astonished Greek businessmen started to question, should we be buying gold, or selling it ourselves?” relates Michael Palairet in his history, “The Four Ends of the Greek Hyperinflation, 1941-1946.” “Buying gold” turned out to be the correct answer.

At a glance, the Greek hyperinflation would seem a pale copy of the Weimar episode. The size of the drachma money supply as the Germans scuttled home in 1944 was a mere 826,308,303-fold greater than the size of the money stock in the year before the outbreak of war in 1939. As for Weimar, marks in circulation in 1923 were 3,250,000,000-fold greater than the German money stock in the 12 months preceding the outbreak of war in 1914. However, note Delivanis and Cleveland, the Greek catastrophe was six years in the making as against nine for the German one. Besides, they say, as the curtain fell on the Greek tragedy, only one-third of Greeks were still transacting in the worthless national scrip, whereas, up to the bitter end, nine-tenths of the German population continued to use marks.
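One crude way to put the two headline multiples on a comparable footing is to spread them evenly over the stated six- and nine-year spans; this steady-compounding assumption is mine, not the authors’:

Greece: 826,308,303^(1/72) ≈ 1.33, roughly 33% money-supply growth per month over 72 months.
Weimar Germany: 3,250,000,000^(1/108) ≈ 1.22, roughly 22% per month over 108 months.

On that reading the Greek expansion, despite the smaller overall multiple, ran about half again as fast as the German one, consistent with the shorter gestation Delivanis and Cleveland describe.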

It can’t be said that Greek monetary management represented much of an improvement over the German kind. Having seen off the enemy, the Greek authorities proceeded to print money—new drachmas—with the note issue climbing to 25,762 million from 126 million. The gold bull market and the cost of living in Athens both resumed their upward course. As for the Greek treasury, now crippled by a ferocious civil war, it liberally availed itself of the fruits of the central bank’s printing press. (“Early during the occupation,” Palairet writes, “the German authorities tried to get the Greek government to reform its system of tax collection, but wrote off the effort, such as it was, as unavailing.”)

Sixty-odd years later, the monetary scenery is transformed. A peaceful Europe is united, more or less, under a single currency. A single central bank aims for a rate of inflation in the neighborhood of 2%—no scientific notation required to calculate the rate of currency debasement these days.

However, in the all-important realm of monetary ideas, not so much has changed. Today, as in the war, government-controlled central banks print up the money with which to finance, directly or indirectly, burgeoning fiscal deficits. Today, as in the war, governments have recourse to “financial repression,” e.g., zero-percent funding costs and QE. And today, as in the war, investors with eyes to see are busily exchanging fiat currencies for tangible stores of value. Plus ça change, as they say in Athens.

A predictable pathology, Benjamin M. Friedman (11/02/2015)

We meet at an unsettled time in the economic and political trajectory of many parts of the world, Europe certainly included. In Europe in particular, the setting is neither usual nor welcome. Germany’s finance minister Wolfgang Schäuble has called last month’s elections for the European Parliament “a disaster,” going on to conclude that “all of us in Europe have to ask ourselves what we can do better … we have to improve Europe.” To be sure, an election is a political event. But just as surely, here and now as in other times and places, what underlies the politics is to a large degree the economics. What is happening in many parts of Europe today is not just a pathology, but the predictable pathology that ensues whenever the majority of any country’s citizens suffer a protracted stagnation in their incomes and living standards.

The origins of this stagnation, in the parts of Europe where it is occurring, are broadly understood. More than half a decade ago, Europe imported the backwash of the financial crisis spawned in the American mortgage market and the US banking system more generally. Factors idiosyncratic to one European country or another – fiscal imbalance, eroded competitiveness, an American-style construction boom, an excess of impaired bank assets, and the like – rendered some parts of Europe especially vulnerable. In the familiar way, both monetary and fiscal policies likewise played a role (although in this context it is not clear what one means by a European fiscal policy). But a large part of the story too bears on the subject of today’s conference – “Debt” – and, in particular, the sovereign debt crisis that Europe has also now been confronting for more than half a decade.

The euro area constitutes a remarkable experiment in this regard. The fact that it is a monetary union without a fiscal union behind it is of course entirely familiar. But a seldom discussed implication of this anomaly is that the euro area economy has no government debt. By “government debt” I mean obligations issued by a public entity empowered to print the currency in which the obligations are payable. All other major economies we know – the United States, the United Kingdom, Japan, Sweden, Switzerland and many others – have government debt in this sense. In the euro area, by contrast, public sector debt is entirely what Americans call “municipals” – that is, obligations issued by public entities not authorised to print the currency owed. It is this feature that makes the bonds issued by Massachusetts, or New York, or Texas, subject to default in a way that US government debt is not. The bonds of all euro area states, even those currently regarded as most secure, like Germany’s, are likewise subject to default in the same sense. It would be difficult to exaggerate how unusual an experiment this situation represents. I am unable to think of another modern example of a major economy with no government debt to anchor its financial structure.

A further unusual aspect of Europe’s situation in this regard is that, following the various actions taken to date, what amounts to municipal debt issued by some of the entities whose fiscal condition is the weakest is, increasingly, owed not to market investors generally but to official lenders. This ownership matters because, unlike private market investors, official lenders in principle do not accept defaults. To a certain extent, of course, this is a fiction. But widely maintained fictions often guide actions, especially in public decision-making, and sometimes they do so with highly unfortunate consequences. This particular fiction also strengthens the commonplace European presumption – which strikes many Americans as bizarre – that sovereign default by a euro area member state would necessarily trigger the country’s exit from the currency union. From time to time in America’s history, US states have defaulted on their general-obligation bonds, and it may happen again. In the recent financial crisis, the two states whose bonds the market deemed most at risk were Illinois (because of unfunded pension obligations) and California (because of the state’s overall budget imbalance at the time). It would not have occurred to an American that if, say, Illinois defaulted on its GO bonds it would, on that account, have to exit the dollar currency union. But this principle seems to be the working assumption in much of the current European conversation.

The route by which Europe arrived at this situation is also well known. The governments of fiscally strong countries lent, or gave, funds to the governments of fiscally weak countries, allowing them to service their existing debt and to issue new debt. (This process also allowed the governments of the fiscally strong countries in effect to bail out their lending institutions without acknowledging that they were doing so, thereby maintaining yet another fiction that may or may not be useful.) The fiscally strong countries provided these transfers and new credits mostly in exchange for imposition of contractionary fiscal policies – and, supposedly, structural reforms – in the fiscally weak countries, in both cases with the goal of rendering them better able to manage their debt. But the problem with the former is that, despite economists’ ability to devise theoretical demonstrations to the contrary, contractionary fiscal policy actually is contractionary. The problem with the latter is not just that structural reforms are politically difficult to implement, but that even when implemented they take a long time to become expansionary. Moreover, even then they are often expansionary in a highly non-neutral way, exacerbating already unwelcome trends in income distribution.

In a group consisting mostly of economists, it is useful to recognise that this approach to Europe’s debt crisis, and even more so the underlying attitudes it reflects, are counterintuitive in yet another way. The standard presumption in economics, dating to the conception of “commerce” articulated by David Hume and Adam Smith and their contemporaries, is that market transactions involve two parties, each of whom acts voluntarily and with sufficient information to make a choice. In the case of credit transactions, this means presuming that both borrowers and lenders acted voluntarily. Among borrowers there are familiar exceptions such as the inherited debt of deceased parents, or the “odious debt” issued by a country’s prior regime, and for just this reason they are normally treated differently. Similarly, there is a stronger case for the presumption of informed voluntariness on the part of institutional lenders than individuals, and this difference in information and expertise provides a standard rationale (along with risk diversification) for financial intermediation. By contrast, today’s public discussion surrounding the European sovereign debt crisis mostly presumes that when a bond is in trouble, the lenders – especially institutional lenders – are victims. In parallel, there is an almost religious presumption of guilt among the borrowers.

From a historical perspective there probably is something religious about these presumptions. Although Jews and Christians and Muslims long regarded lending with suspicion (and Muslims still do), by the beginning of the 19th century evangelical Protestants had mostly come to regard borrowing as sinful, even when the debt was serviced and repaid on a timely basis. Non-payment, of course, elevated the negative moral connotation to a whole different plane. As the 19th century moved on, in one European country after another (and in America too) the active frontier of this debate was often the movement to introduce limited liability for what we now think of as corporate borrowers and equity investors: limited liability represented a retreat from what historians often refer to as the “retributive philosophy” of 19th century evangelicalism. By mid-century, public attitudes had begun to change, driven in large part by the new awareness of the possibilities for ongoing economic growth and waning ambivalence toward it. Even so, the lingering opprobrium attached to borrowing persisted, especially in the public sector context. As one long-ago historian of HM Treasury described this development, “An ethic transmuted into a cult, this ideal of economical and therefore virtuous government passed from the hands of prigs like Pitt into those of high priests like Gladstone. It became a religion of financial orthodoxy whose Trinity was Free Trade, Balanced Budgets and the Gold Standard, whose Original Sin was the National Debt. It seems no accident that ‘Conversion’ and ‘Redemption’ should be the operations most closely associated with the Debt’s reduction.”

Today a reversion to the “retributive philosophy” of the 19th century – to the view, in the words of another historian of that day, that “a just economy was more to be sought after than an expanding one” – is clearly in evidence in Europe’s approach to its sovereign debt crisis. Whether Europe’s economy has thereby achieved justice is a matter for a different discussion. It has clearly foregone expansion. The imposition of contractionary policies in the most heavily indebted countries has reinforced a perverse feedback between weak economies and questionable sovereign debt, with a further feedback between both of those and troubled banks. Cross-border lending has significantly contracted, and some countries face what amounts to a credit crunch despite the ECB’s expansionary monetary policy. Nor are these simply isolated phenomena, with little bearing on the broader European economy. Back when I was first teaching economics, a plausible exam question was “Why is unemployment in Europe always so much greater than in the United States?” Then, for some years, asking the question in the opposite direction seemed more apt. Today, with the euro zone unemployment rate roughly double that in the United States, we can bring out the old exams again.

The more fundamental consequence is ongoing stagnation of incomes and living standards for the majority of the population in many European countries. The median household income in the United Kingdom, adjusted for what little inflation there has been, peaked in 2007 and has yet to regain that level. France, Italy and the Netherlands have not experienced complete stagnation by this measure, but the real median income in each has seen only a minimal increase. Ireland, Greece and Portugal have all experienced stagnation, or worse, in real median income over this period. Spain did too for half a decade, only last year finally enjoying a solid increase.

A parallel stagnation of incomes has taken place in the United States as well, but America’s federal fiscal structure provides at least some built-in cushioning mechanisms that Europe lacks. Further, in Europe’s fiscally weak countries the usual frustration over stagnant incomes and living standards is today compounded by the sense of being dictated to, in many citizens’ eyes perhaps even exploited, by foreigners. Twenty-five centuries or so ago, if another city-state had conquered the Athenians the then-conventional tribute would have required some hundreds of Athens’s finest youth to trek off to the victors’ lands, to do forced labour, and an equal number of Athens’s fairest virgins to go as well, for purposes best left unspecified. Today’s political conventions are sharply different, but the resulting youth labour flows are similar.

And, as Mr. Schäuble has highlighted, the all-too-familiar consequence of this economic stagnation, together with the widespread absence of employment opportunities, is a turn away from (small-l) liberal values toward xenophobic populism of either the right or the left. The same pathology has emerged before, again and again, in one country after another around the world, whenever the citizenry has lost its sense of forward progress in its material living standard, and lost too the optimism that that progress will resume any time soon. Europe today increasingly looks to be on the verge of repeating key elements of the experience of the years between the two World Wars, with not only the ascendancy of extremist political movements but cross-border communication among them. There are differences, of course: in the 1930s the central node of that communication was the rising Nazi movement and then government in Germany, while today it looks as if the facilitating vehicle will instead be the European Parliament. But the effects are parallel, and so are parts of these groups’ programs, today including the campaign to roll back within-EU immigration and EU regulatory authority, not to mention the entire European Union project.

With European monetary policy already expansionary – with the introduction just last month of a negative deposit rate, innovatively so – and since Europe as such has no fiscal policy, the urgent need today is for debt restructuring and relief for the fiscally weak European countries (and it is useful to recall that in real time it is often hard to tell the difference between the two). In a similar way, in the United States today there is need for relief for underwater homeowners whom the bail-out of US lenders a half-decade ago largely neglected. But the need in Europe is more acute.

Again looking back to the interwar period, there is ample precedent, within Europe, for both debt relief and debt restructuring. Indeed, that experience is also the origin of our host institution this evening. The reparations due from Germany under the Versailles treaty were quickly transformed into the obligation to service two series of bonds, scaled to reflect the recovering country’s ability to pay; but in the end neither bond was ever fully paid. Initially, the Weimar government serviced the bonds to foreign investors at the same time as German states and local governments were borrowing from abroad, so that on net the international flows were mostly recycling while within Germany there was substantial intergovernmental shifting of burdens. The 1924 Dawes Plan and then the 1929 Young Plan further reduced what Germany owed, and each arranged for yet a new foreign loan. The need to facilitate transactions under the Young loan is what led, in 1930, to the creation of the Bank for International Settlements.

The Lausanne Conference in 1932 ended all German reparations payments, in exchange for which Germany deposited with the BIS bonds representing a small fraction of what was originally due; the bonds were never issued, and some years later the BIS burned them. By then Germany had acquired other foreign debts, however. The Nazi government initially serviced the debt but blocked the conversion of the Reichsmarks paid into foreign currency. It then began making payment half in Reichsmarks and half in non-convertible Reichsbank scrip. After a series of further steps, in 1934 Germany defaulted on both the Dawes and the Young loans.

After the war, the 1953 London Debt Conference took up the matter of Germany’s unfulfilled commitments, including government debt, state and local debt, and even private debt. The London agreement reduced the amount due by at least half (most likely more, depending on the calculation) and rescheduled the remainder so that no principal payments were due for five years and the rest strung out over 30 years. A significant part of the debt was further deferred, with no interest due along the way, until such time as reunification might occur – which turned out to be nearly four decades later. The United States also converted into grants most of the loans extended under the Marshall Plan, in parallel with treatment of the other recipient countries, and did the same for loans under the Government and Relief in Occupied Areas programme.

As one historian summarized the approach taken to Germany’s post-war debt relief, “at the time of the London conference most observers had in mind long years of what they viewed as Germany’s irresponsible treatment of foreign debts and property owned by foreigners.” Nonetheless, “The entire agreement was crafted on the premise that Germany’s actual payments could not be so high as to endanger the short-term welfare of her people … reducing German consumption was not an acceptable way to ensure repayment of the debts.” The contrast to both the spirit and the implementation of the approach taken to today’s overly indebted European countries is stark.

There is no economic ground for Germany to be the only European country in modern times to be granted official debt restructuring and debt relief on a massive scale, and certainly no moral ground either. The supposed ability of today’s most heavily indebted European countries to reduce their obligations over time, even in relation to the scale of their economies, is likely yet another fiction – and in this case not a useful one. As the last decade’s financial crisis fades into the past, and market interest rates move up to a more normal configuration, these countries and others too will find their debt increasingly difficult to service. In the meanwhile, the contractionary policies imposed on them are depressing their output and employment, and their tax revenues. And the predictable pathology that follows from stagnant incomes and living standards is already evident.

James Tobin often remarked that there are worse things than three percent inflation, and from time to time we have them. Indeed, we just did. In the same vein, there are worse things than sovereign debt defaults, and from time to time we have them too. They are in progress as we meet.