Friday, March 6, 2026

The Administrative State vs. Constitutional Freedom

One of the continuous threads running through the history of the United States from 1789 onward is the question of how to preserve the constitutional system of “checks and balances” by means of the “separation of powers” embodied in the judicial, legislative, and executive branches. This much should be familiar to any student of history or civics.

Preserving this constitutional system amounts to preserving the human rights, civil rights, and civil liberties which it was instituted to protect. The task of preserving this system often takes the form of asking whether each of the three branches is performing its tasks and avoiding the tasks assigned to the other branches.

A branch of government is negligent if it fails to perform its assigned tasks; it is usurpative if it performs tasks assigned to other branches.

It is therefore necessary not only to limit the power of government as a whole, but also to limit the power of each of its parts.

Opposing this constitutional system are those who embrace the idea of an administrative state. These two views compete: on the one hand, a central bureaucracy which makes laws and collects taxes apart from a freely elected representative legislature; on the other hand, a decentralized system in which federal and local governments each have separate assigned powers, and which avoids permanent bureaucracy and standardized administration.

The administrative state comes into conflict with constitutional rights when the agencies of the executive branch make laws and collect taxes, usurping the role of the legislative branch. This leaves citizens at the mercy of unelected agencies which do not represent the voters, but rather impose their will on the voters.

The need for limited government arises from the same source as the need for the separation of powers: human nature. The goal in creating such systems is to prevent situations in which individuals or groups hold positions of great power, because sooner or later they will succumb to the temptation to use that power in ways which do not represent the desires and thoughts of the voters, as John Marini writes:

James Madison wrote in The Federalist Papers that factionalism is “sown in the nature of man”; thus there will always be political conflict — which at its starkest is a conflict between justice, the highest human aspiration concerning politics, and its opposite, tyranny. This conflict between justice and tyranny occurs in every political order, the Founders believed, because it occurs in every human soul. It is human nature itself, therefore, that makes it necessary to place limits on the power of government.

President Woodrow Wilson wanted to take some of the budgetary powers assigned to the legislative branch. To this end, he proposed legislation which would reassign those powers to the executive branch. He vetoed the first version of this legislation in 1920 because he thought that it did not give enough power to the president. A revised version, giving that power to the president, was passed by Congress in 1921 as the Budget and Accounting Act of 1921 and signed into law by President Warren Harding.

The passage of this bill was part of the progressive political agenda — Wilson was a leader in the progressive movement — and did damage to the constitutional system. Ironically, the anti-progressive Harding was the one who signed it into law. It will be left as an exercise for the reader to discover why Harding signed it.

The progressive vision was that government should control, rather than be controlled, as John Marini explains:

Progressive leaders were openly hostile to the Constitution not only because it placed limits on government, but because it provided almost no role for the federal government in the area of administration. The separation of powers of government into three branches — the executive, the legislative, and the judicial — inhibited the creation of a unified will and made it impossible to establish a technical administrative apparatus to carry out that will. Determined to overcome this separation, one of the chief reforms promoted by early Progressives was an executive budget system — a budget that would allow Progressive presidents to pursue the will of a national majority and establish a non-partisan bureaucracy to carry it out. Congress was initially reluctant to give presidents the authority to formulate budgets, partly because it infringed on Congress’s constitutional prerogative — but also because it was still understood at the time that the separation of powers stood as a barrier to tyranny and as a protection of individual freedom. Eventually, however, Congress’s resistance weakened.

When laws are made by the executive branch instead of by the legislative branch, they are unconstitutional and therefore illegitimate. In an attempt to hide this fact, many such laws, made by unelected officials in federal agencies, are labeled as “rules” or “regulations” instead of laws. When agencies impose penalties on those who violate such regulations, the agencies are usurping the role of the judicial branch, and such trials and their verdicts and sentences are therefore also unconstitutional and illegitimate. When these agencies collect taxes, which are labeled as “user fees” or other similar misleading phrases, they are usurping Congress’s exclusive right to levy taxes. Despite the attempt to label them as something other than taxes, they are in fact taxes, and are illegitimate and unconstitutional because again the executive branch has stolen the role of the legislative branch.

To defend this violation of the principle of the separation of powers, substantial mental gymnastics are required, as Philip Hamburger notes:

The Constitution authorizes three types of power, as we all learned in school — the legislative power is located in Congress, executive power is located in the president and his subordinates, and the judicial power is located in the courts. How does administrative power fit into that arrangement?

The answer is that laws, even when they are called ‘regulations’ and ‘rules’ or other technical words, are to be produced by the legislative branch, not the executive branch. ‘User fees’ are taxes and are therefore to be levied only by the legislative branch. ‘Hearings’ which are in fact trials are to be conducted only by the judicial branch, and ‘fines’ which are in fact judicial penalties are to be imposed only by the judicial branch.

Administrative power as a standalone concept is neither constitutional nor legitimate.

Pinpointing the precise moment at which ‘administrative law’ began is difficult, but in any case it is more than a century old. The concept of administrative law is entrenched and ossified, even though plainly corrupt and usurpatory.

Most often, the alleged need for administrative law is introduced as a practical necessity. It is practical and necessary only if one wishes to increase the power of government — and when government power increases, individual freedom decreases. The principle of limited government is intended to prevent the government from becoming an efficient manager or a practical regulator.

Citizens don’t want to be managed or regulated. Citizens elect representatives, not rulers. Those in government are to represent the ideas and desires of the citizens.

One of the several benefits of the separation of powers, and of checks and balances, is gridlock. Gridlock is a benefit and an asset. It prevents the government from becoming too adept at imposing control on citizens. One of the goals of the constitutional system is to maximize personal freedom and individual political liberty.

The progressive movement casts itself as modern, and therefore its opponents as reactionary and retrograde. This is, however, merely a verbal flourish, as Philip Hamburger writes:

The conventional answer to this question is based on the claim of the modernity of administrative law. Administrative law, this argument usually goes, began in 1887 when Congress created the Interstate Commerce Commission, and it expanded decade by decade as Congress created more such agencies. A variant of this account suggests that administrative law is actually a little bit older — that it began to develop in the early practices of the federal government of the United States. But whether it began in the 1790s or in the 1880s, administrative law according to this account is a post-1789 development and — this is the key point — it arose as a pragmatic and necessary response to new and complex practical problems in American life. The pragmatic and necessitous character of this development is almost a mantra — and of course if looked at that way, opposition to administrative law is anti-modern and quixotic.

Although the progressives presented the idea of an administrative state, or a managerial state, as something modern, it was in fact a step backward in time, to the governments of kings like Frederick the Great, who were called ‘enlightened despots’ and ‘enlightened absolutists’ because they considered themselves to be wise and therefore entitled to enforce upon their subjects whichever regulations occurred to them.

What is truly modern is a limited government, which regards freedom as the property of each human being. This replaces the monarchist view which regards the right to rule as the inherited family property of one ruler.

Although reason and justice are violated when one branch of the government seizes the powers assigned to another branch, reason and justice are equally violated when one branch of government is negligent and fails to carry out its assigned role. It is an illegitimate government when the executive takes on the legislative task; it is equally illegitimate when the legislative branch fails to legislate, as Christopher DeMuth writes:

Part of the shift has resulted from presidents, executive agencies, and courts seizing congressional prerogatives.

If it is a crime when a president steals Congress’s authority, then it is equally a crime when Congress fails to embrace its own authority; Christopher DeMuth continues:

But the most important part of the story has an opposite plot: Congress itself, despite its complaints about executive and judicial poaching, has been giving up its constitutional powers voluntarily and proactively.

It is a crime when the legislative branch fails to legislate; it is equally a crime when Congress fails to exercise its power over taxation and spending:

Congress has even handed off its constitutional crown jewels — its exclusive powers, assigned in Article I, Sections 8 and 9, to determine federal taxing and spending.

The details of the Constitution were crafted to keep the government bogged down in its own sluggishness — checks and balances countering each other, negotiating agreements between the branches of government — so that the government could not be efficient or effective in its regulation of human beings. If one of the purposes of government — indeed, the main purpose of government — is to protect individual freedom, then government should not be curtailing freedom.

Thursday, September 11, 2025

The United States Declares War in April 1917: Why?

The United States declared war on Germany in April 1917. President Woodrow Wilson had campaigned for reelection by promising that he would continue to keep the United States out of the war. The election was close, decided mainly by domestic issues rather than foreign policy, and Wilson won by a thin margin. His messaging had been clear: he had kept America out of WW1, and he would continue to avoid any U.S. entry into the war.

His message was false.

Prior to his reelection in November 1916, he foresaw that America would be in the war soon. Uncertain of an electoral victory, Wilson developed a contingency plan: should he lose the election, he’d appoint the president-elect to the office of Secretary of State, and then Wilson and his vice president would resign. The winner of the election would thus take office immediately, instead of waiting for an inauguration in March 1917.

Why did Wilson want this accelerated post-election timetable?

He was certain that the United States would be at war in 1917 and wanted a faster and smoother transition of power. Privately, Wilson treated war as a certainty and planned accordingly. Publicly, he pledged to avoid U.S. involvement in the war.

Once Wilson had been inaugurated in March 1917 and had safely begun his second term, he rapidly moved forward with his plan for war. The Americans had expressed overwhelmingly their desire to remain at peace, so Wilson needed excuses and a propaganda campaign to persuade them to go to war.

Wilson argued that two factors necessitated a declaration of war against Germany: the German policy of unrestricted submarine warfare and the German efforts to encourage Mexico to declare war on the United States.

While these two factors appear at first glance to be reasonable, they exhibit weaknesses upon closer examination.

Germany had implemented a policy of unrestricted submarine warfare in 1915, paused the policy in 1916, and resumed in 1917. So 1917 was the second time that Germany carried out unrestricted submarine warfare. The first time, in 1915, Wilson did not see it as a cause for U.S. entry into the war. His stance in 1917 was inconsistent with his previous view that the U.S. should stay out of the war despite unrestricted submarine warfare.

In January 1917 the German government sent a telegram to its ambassador in Mexico with a message for the Mexican government. This message became known as the Zimmermann Telegram. Germany promised to fund a Mexican war against the United States, and at the end of that war, Mexico was to possess U.S. land. Wilson argued that Germany’s attempt to start a war between Mexico and the United States was a reason for the U.S. to declare war on Germany. But the Zimmermann Telegram was not the first, and not the most significant, effort made by Germany to start a war between Mexico and the United States. In 1914, the Germans had sent a ship filled with weapons and ammunition for the Mexican government. In 1915, the Germans had given $12 million to the Mexican government to fund military activity against the United States. The actions of the German government vis-à-vis Mexico in 1914 and 1915 were not seen by Wilson as reasons for declaring war, but in 1917 he presented the Zimmermann Telegram as a reason for war.

It is clear, then, that Wilson used submarine warfare and the Zimmermann Telegram as excuses for war. They were featured in his propaganda campaign. But what were the reasons for war?

Wilson had at least two reasons for declaring war.

First, the progressive movement in the U.S., of which Wilson was a part, saw the war as an opportunity: if the U.S. were a combatant, then the U.S. would help craft the peace treaties at the end of the war, and those treaties would shape global diplomacy for years into the future. Wilson and his fellow progressives hoped to create new institutions and strengthen existing ones in order to find peaceful solutions to diplomatic tensions and thereby avoid future wars. These institutions would regulate relations and trade among nations; this was part of what was meant by the slogan, “Make the world safe for democracy.”

Second, the war would also be an opportunity for the government to argue that it needed extraordinary powers to intervene in the American economy and in society at large, because the war created an “emergency” situation. The progressive wing of the Democratic Party wanted these emergency powers to accelerate activities it had already begun. Some of these could be presented as part of the war effort, such as censorship of the press and the surveillance of individuals whose political loyalty was suspect. Other actions were clearly not related to the war, but were carried out using emergency powers nonetheless: increasing racial segregation, reshaping educational institutions to conform to progressive ideas, rewriting housing policies, and generally regulating society. The draft was particularly appealing to the progressives, because it affirmed the government’s power to control the individual.

In addition to these two reasons, Wilson had a strong personal hatred for the Habsburg family, the ruling dynasty of Austria. One reason for this hatred was that Wilson saw the Habsburgs as opposed in some ways to his ideology: The Habsburg realm was a diverse, multiethnic territory; Wilson wanted a homogeneous nation with one uniform culture. Other reasons for Wilson’s hatred toward the Habsburgs may be less rational and more emotional. Curiously, Wilson directed much less hate toward the Hohenzollerns, the ruling dynasty of Germany, even though the U.S. declared war on Germany.

Wilson was thus equipped with excuses which hid his reasons. After winning the November 1916 election with his campaign’s anti-war rhetoric, he promptly began to lobby energetically for the war.

Prior to winning reelection, he had been quite happy to profit from the war, as historians Allan Millett and Peter Maslowski write:

The American role in World War I derived its character less from strategic thinking in the United States than from the geopolitical notion that the future well-being of the United States depended upon the balance of power in Europe and the outcome of the war. Discarding the hallowed assumption that Europe’s affairs did not involve the United States and the security of the Western Hemisphere, the Wilson administration decided that the nation had a critical stake in an Allied victory. American involvement stemmed from economic self-interest as well as an emotional commitment to support “democracy” (France and Great Britain) against “autocracy” (Germany). After a brief economic dislocation when the war began in 1914, American bankers, farmers, industrialists, and producers of raw materials exploited British naval control of the Atlantic and Allied financial strength to make the war the biggest profit-making enterprise in the history of American exporting. Before American entry, the balance of trade, already favorable to the U.S., jumped by a factor of five; the Allies liquidated $2 billion of American assets and privately borrowed another $2.5 billion to pay for their purchases. In contrast, Germany secured only $45 million in American loans.

Because Wilson had spent the previous years proclaiming that he wanted to keep America out of the war, the U.S. military was not ready when war was declared in April 1917. By contrast, the U.S. industrial base was already partially on a war footing, because it had been producing and selling weapons, ammunition, and other war supplies to France and England.

The U.S. Navy had battleships and cruisers, but not enough destroyers. Shipping war materials and soldiers from America to Europe required destroyers to escort and protect the cargo ships. The task would be to build many destroyers quickly, as historian Russell Weigley notes:

For the kind of naval campaign in which it now found itself engaged, the United States also had built the wrong warships. The Navy should have had more destroyers. The Royal Navy had almost 300, but nearly 100 of them were busy screening the Grand Fleet. The United States had seventy, only forty-four of them relatively new oil-burning ships. It was not until early July, 1917, that as many as thirty-four American destroyers reached Queenstown to reinforce the British, and the rest of the American squadrons consisted mainly of the obsolescent types, which were retained in Western Hemisphere waters. Belatedly, battleship building was pushed aside for destroyers and smaller escort craft.

The role of the United States in WW1 was significant. Was it decisive? Responsible historians are cautious about such counterfactuals: there is no certainty in them, and great emphasis must be placed on the word ‘probably.’

If the United States hadn’t declared war, the war probably would have lasted significantly longer. Negotiations to end the war probably would have been more complex, because the two sides would probably have been of nearly equal strength. In reality, the Western Allies were significantly stronger after the United States declared war. It is not clear which side would have won if the U.S. had not entered the war.

The U.S. Army began drafting young men in 1917 and eventually grew to more than four million men. Approximately two million of them were transported to France, and approximately one million engaged in combat. Large numbers of American soldiers did not see combat until 1918.

Aside from combat, U.S. Army engineers did significant work in laying railroad lines, building berths for ships, and setting up telephone systems.

The U.S. Navy was active, escorting convoys of ships across the Atlantic, and protecting those ships from submarine attacks.

The United States made a significant contribution to the war effort as it sold, and sometimes gave, war materials to its fellow Allies. The U.S. lent, and sometimes gave, vast sums of money to the Allies. Some of those loans were later forgiven.

In sum, the U.S. military influence on the course of the war began quite late in the course of the war, but its economic influence had been there from the beginning. The U.S. formally declared war, beginning its military participation, for reasons which were at the time not disclosed to the U.S. population.

Tuesday, February 4, 2025

Changes in American Attitudes Toward Alcohol and Their Unintended Consequences

Over centuries, American social thought about alcohol has developed significantly. To understand earlier phases of this process, it is first necessary to shed some stereotypes and cliches which still pervade historical images in the popular imagination.

Both the Pilgrims, who settled in southeastern Massachusetts around 1620, and the Puritans, who settled further north along the eastern coast of Massachusetts around 1630, cheerfully produced and consumed their own beer and wine. The conventional image of these two groups as opponents of alcohol is historically inaccurate.

More than a century later, George Washington oversaw the production of beer, wine, and distilled beverages, both at his home in Virginia and at various army camps with his troops. Thomas Jefferson invested a great deal of thought and energy into growing specific varieties of grapes in order to make various types of wine. Samuel Adams was a maltster, making a key ingredient for beer.

Fermented apple cider was a popular beverage throughout North America.

In general, then, the area which was at first British colonies, and which was later the United States, had a culture which demonstrated no strong opposition to alcohol, and in which people of various social classes lost none of their respectability by consuming alcohol. This seems to have been the case for approximately two centuries.

There was very little legal regulation about who might consume alcohol, or where or when or how alcohol might be consumed. There was certainly some taxation of alcohol — hence the famous Whiskey Rebellion of the early 1790s — but this taxation was for the purpose of raising revenue, not for the purpose of changing social patterns of consumption.

This culture was also capable of clearly distinguishing between, on the one hand, the healthy and appropriate use of alcohol, and on the other hand, the excessive and unhealthy abuse of alcohol.

The fact that men who regularly enjoyed a glass of wine with supper were at the same time opposed to drunkenness was a fact so obvious that it did not need to be explained. A century later, however, that same fact was no longer obvious to many people, and required a great deal of explaining.

How and why did American society develop new attitudes toward alcohol?

One factor in this cultural shift was the distinction between various religious groups. The older and more established groups in North America were the Episcopalians (formerly Anglicans), the Lutherans, and the Roman Catholics. These groups had no objection to alcohol as such, though they condemned drunkenness.

Newer groups were represented initially and primarily by the Methodists. They argued for abstinence from alcohol in any form and in any amount. “Alcohol consumption,” writes historian Leah Rae Berk, “did not begin to decrease until the early 1830s,” indicating the era in which Methodist influence reached significant levels.

The Quakers and some branches of the Presbyterian Church also embraced the idea of abstaining from alcohol.

The anti-alcohol movement initially focused on distilled beverages, but eventually sought to eliminate all forms of alcohol.

From that point, it was less than a century until the Eighteenth Amendment to the Constitution, ratified in 1919, prohibited the production, sale, and transport of almost all forms of alcohol. Leading up to that amendment was the Temperance Movement, which had grown over the preceding century.

The goals of the Methodists and the Temperance Movement were clear: to reduce and eventually eliminate the consumption of alcohol.

Prior to the Prohibition Amendment, and after its repeal in 1933, the movement brought about incremental change in the form of local and state laws. Such legislation limited when, where, and how alcohol could be produced, sold, or consumed, and who might consume it.

Their words and actions, however, were counterproductive. While the Temperance Movement was eventually successful in bringing about “blue laws” and finally Prohibition, it also set into motion the forbidden fruit effect.

The “forbidden fruit effect” is the desire for something which has been forbidden, precisely because it has been forbidden. In America during the late eighteenth century and early nineteenth century, alcohol was an unremarkable part of American life. Parents and grandparents often gave children small sips, or small cups, of beer or wine. There was no legal boundary — at age 18 or at age 21 — for purchasing or consuming. Public consumption was not noteworthy. Moderate consumption was as normal a part of daily life as eating bread. A glass of wine or beer at mealtime was so common that it was uninteresting.

When the Temperance Movement began to make itself felt, through regulations and especially through social and parental attitudes, alcohol became an object of fascination, especially for young people. Alcohol became desirable in the minds of young people because they were forbidden to have it. Possessing and consuming it became a goal.

The Temperance Movement created the exact thing which it hoped to avoid: binge drinking, increased drinking among the young, and a greater attraction to alcohol.

The social dynamic, especially in the form of parental attitudes, varied significantly across the various demographic segments of America. Parents who were diligent in ensuring that their children never drank alcohol, or at least never drank it until some arbitrary age, saw their children devise every scheme to obtain alcohol secretly. Such children were more likely to drink to excess, because they had never seen adults model moderate consumption. In places where a set age was culturally or legally enforced, it became a tradition to consume to excess on one’s birthday, having finally reached that age limit. By making alcohol into a forbidden fruit, the movement had increased the focus on it, and the desire for it.

By contrast, parents who resisted the legal and social pressure, and who served their children small amounts of beer or wine at mealtimes, found their children less likely to consume to excess and generally less interested in alcohol.

Looking at the social and cultural development of North America, and especially that part of North America which would become the United States, there is a clear turning point: the seventeenth and eighteenth centuries were times in which alcohol consumption was unremarkable and moderate. The nineteenth and twentieth centuries saw the rise of an alcohol phobia, and an attendant effort to impose legal and cultural restrictions designed ultimately to eliminate alcohol. This effort not only failed, but produced an increased fascination with alcohol, especially among young people.

Friday, December 27, 2024

A Bumpy Start: The U.S. Economy Immediately after the Treaty of Paris

The American Revolutionary War ended for practical purposes in 1781 with the near-cessation of hostilities, but it ended officially in 1783 with the signing of the Treaty of Paris.

The treaty was finalized in September 1783. This gave the thirteen states, formerly the thirteen colonies, the final degree of certainty that they would now be recognized as sovereign. Markets are fond of reliable, definite conditions, and so the confirmation of American independence and autonomy was good for the American economy.

But there were negative factors under the surface of the initial economic enthusiasm: the thirteen states had accumulated debt in pursuing the war; private property had been destroyed in military action; there was not much capital for investment into new businesses; the currencies used as a means of payment were uncertain — individual colonies had printed their own paper money, denominated in pounds, shillings, and pence — and it was not clear if a shilling from Massachusetts was equal to a shilling from Virginia; coinage included a mix of Spanish dollars, colonial coins minted prior to the revolution under British auspices, and coins minted by the United States starting in 1783; counterfeit money was circulating.

The Continental Congress had begun printing money in 1775 and did so until 1779. Confusingly, one Continental dollar when issued was valued at 5 Georgia shillings, 6 Connecticut shillings, or 8 New York shillings. The value of the Continental dollars fell steadily. By 1780, they were valued at around one-fortieth of their face value.
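As a rough worked illustration (these are only the approximate exchange ratios and the depreciation factor quoted above, not precise market data), the inconsistency at issue and the later collapse in value can be written out as:

\[
1\ \text{Continental dollar (at issue)} = 5\ \text{Georgia shillings} = 6\ \text{Connecticut shillings} = 8\ \text{New York shillings}
\]
\[
\text{value by 1780} \approx \tfrac{1}{40} \times \text{face value}, \qquad \text{e.g. a } \$100\ \text{note} \approx \$2.50.
\]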

Shortages of paper currency and coin caused instances of bartering or the use of “commodity money” — tobacco or desirable animal skins used as currency.

The British ended their blockade of the thirteen states when the treaty was signed in 1783. While this allowed an influx of goods into the new states, it dampened any domestic aspirations to start new businesses: competing with the British imports was difficult.

Under such circumstances, trade was sluggish.

The economic environment gave little motive for entrepreneurial expansion of business, as historian Ron Chernow writes:

After the Revolution, New York experienced a brief flush of prosperity that faded and then vanished in 1785, snuffed out by swelling debt, scarce money, and dwindling trade. Falling prices hurt indebted farmers, forcing them to repay loans with dearer money. As a Bank of New York director, Hamilton worried that defaulting debtors would also feign poverty and ruin their creditors. He later said of the deteriorating business climate, “confidence in pecuniary transactions had been destroyed and the springs of industry had been proportionably relaxed.”

Having only recently gained independence and sovereignty, the thirteen new states were facing a grave economic threat. The conditions were not the same in all thirteen, but the common elements were an oversupply of dubious, confusing paper currency and large amounts of debt.

Although there was some improvement in economic conditions by 1786, the structural problems of the economy remained: the thirteen states were relying on large quantities of imported goods, and the revenue from exports had fallen from its pre-Revolution levels.

Ultimately, the solution to the economic problems would be the solution to the political problems. The government created by the “Articles of Confederation” was not up to the task of creating the stable environment needed for economic growth. Britain and France controlled much of the trading in and around the Americas and across the Atlantic.

The new Constitution, written in 1787 and put into effect in 1789, would create a political climate which promoted entrepreneurship. The political question was an economic question. The result would be a more unified economy, founded on a more unified form of government, as Ron Chernow notes:

With the possible exception of James Madison, nobody had exerted more influence than Hamilton in bringing about the convention or a greater influence afterward in securing passage of its sterling product. His behavior at the convention itself was another matter. It would long seem contradictory — and, to Jeffersonians, downright suspicious — that Hamilton could support a document that he had contested at such length. In fact, the Constitution represented a glorious compromise for every signer. This flexibility has always been honored as a sign of political maturity, whereas Hamilton’s concessions have often been given a conspiratorial twist. For the rest of his life, Hamilton remained utterly true to his pledge that he would do everything in his power to see the Constitution successfully implemented. He never wavered either in public or in private. And there was a great deal in the document that was compatible with ideas about government that he had expressed since 1780. His reservations had less to do with the powers of the new government than with the tenure of the people exercising them. In the end, nobody would do more than Alexander Hamilton to infuse life into this parchment and make it the working mandate of the American government.

The new Constitution provided for a national currency, which reduced the confusion of using everything from Spanish dollars to pre-Revolutionary local coinage to the flood of paper currency generated by the individual states: one standard coin and currency had a unifying effect on the economy.

Likewise, the new Constitution barred internal tariffs on interstate commerce, creating a single free-trade area among the states. This energized trade.

After the end of the Revolutionary War, the U.S. showed great economic potential, but also faced great economic obstacles. Many of those obstacles would be removed by the new Constitution.

The United States economy was on a more solid footing after the Constitution took effect in 1789. To be sure, there were new obstacles to be overcome. The British began to interfere more with American trans-Atlantic trade precisely because the U.S. economy was doing better. This was one factor leading to the War of 1812.

Wednesday, June 26, 2024

When the Depression Became the Great Depression: Going from Bad to Worse

The reader will know that the stock market crash of October 1929 is associated with the Great Depression. But how is it associated? Was it the cause of the Great Depression? Or was it a symptom of what was already going to happen — a sort of leading indicator?

For a century, economists and historians have debated those questions, without arriving at conclusive answers. They’ve also asked these kinds of questions: When did the Great Depression end? If the stock market crash caused it, was it the only cause? If the stock market crash didn’t cause it, what might have been the cause or causes? Could the Great Depression have been avoided?

While few definitive explanations have emerged, some hypotheses seem more plausible than others. For example, it is now widely accepted that the stock market crash did not cause the Great Depression, but rather reflected a nervousness about, or an awareness of, some troubling economic trend in the making. The stock market functions primarily as a barometer of investor psychology. Stock prices go up when people have optimistic expectations. Stock prices go down when people have grim forebodings.

Another generally endorsed hypothesis is that, whatever the cause or causes of the Great Depression may have been, the depression didn’t have to be great. It could have been merely an ordinary depression, not a great one.

The Depression went from being merely a depression to being the Great Depression because of government intervention in the economy. Economies organically seek equilibrium. An event or situation, like a depression, which takes an economy out of equilibrium, will trigger the economy to rearrange itself in order to work its way back to equilibrium. When governments take action to fix ailing economies, these actions, despite their good intentions, get in the way of the natural process of returning to equilibrium.

The reader will be aware of President Roosevelt’s New Deal programs, a mixture of massive government spending, massive tax increases, and massive increases in government debt. While intended as a way to help the economy, FDR’s New Deal prevented the economy’s mechanisms from automatically compensating for any deviations from equilibrium and from thereby bringing the economy back to balance, as historian Ben Shapiro writes:

According to Professors Harold Cole and Lee Ohanian of UCLA’s Department of Economics, FDR’s policies prolonged the depression by at least seven years.

FDR tried and abandoned different strategies in quick succession. But all of his strategies shared a common element: the assumption that the government should intervene in the economy, rather than stand back and let the economy sort itself out. At one point Roosevelt persuaded many manufacturing companies to give their workers an outrageous 25% raise; in return, those companies were given permission to raise their prices substantially. Here was the core of the problem: the government should have no say in how much people are paid, and it should have no say in what prices manufacturers charge for their products. The catastrophic results of FDR’s wage and price controls were predictable, as Shapiro explains:

Not surprisingly, wages were 25 percent above market level, but unemployment was also 25 percent higher than it should have been. Demand stalled because of artificial boosts in prices.

Professor Ohanian clarifies why wage and price controls lead only to more problems:

High wages and high prices in an economic slump run contrary to everything we know about market forces in economic downturns. As we’ve seen in the past several years, salaries and prices fall when unemployment is high. By artificially inflating both, the New Deal policies short-circuited the market’s self-correcting forces.

Likewise, Professor Cole describes how the economy’s self-correcting mechanisms are stymied when the government tries to correct the problems:

President Roosevelt believed that excessive competition was responsible for the Depression by reducing prices and wages, and by extension reducing employment and demand for goods and services. So he came up with a recovery package that would be unimaginable today, allowing businesses in every industry to collude without the threat of antitrust prosecution and workers to demand salaries about 25 percent above where they ought to have been, given market forces. The economy was poised for a beautiful recovery, but that recovery was stalled by these misguided policies.

What drove FDR’s economic decision-making? Henry Morgenthau was one of FDR’s close personal friends; Morgenthau became friends with Roosevelt long before either of them entered politics, and twenty years before Roosevelt became president. Not only was Morgenthau Roosevelt’s friend until the latter died in 1945, he was also appointed by Roosevelt to a series of government positions, culminating in his appointment as Secretary of the Treasury by Roosevelt. He remained in that post for over a decade during Roosevelt’s presidency.

Despite his good political and personal relationship with Roosevelt, Morgenthau described FDR as essentially uninformed about economics. During one of his political campaigns, FDR bragged about his education, saying “I took economics courses in college for four years.” The registrar at Harvard, however, revealed this to be untrue.

Accounts provided by a number of Roosevelt’s friends and appointees confirm that he often chose arbitrary numbers and used them to set economic policy, as Ben Shapiro reports:

FDR’s own economic ignorance is legendary. According to historian Amity Shlaes, FDR used to tinker with the price of gold arbitrarily. At one point, he raised the price of gold by 21 cents because he said it was a “lucky number, because it’s three times seven.” Henry Morgenthau, part of FDR’s brain trust, said later, “If anybody knew how we really set the gold price through a combination of lucky numbers, etc., I think they would be frightened.”

It remains plausible that there were few or no coherent systematic underpinnings for FDR’s economic policies, and that those policies did more harm than good, preventing what would have been a small depression from self-correcting. The New Deal policies made a short-term depression into the Great Depression, causing it to last longer and have more extreme impacts than it otherwise would have had, as Shapiro describes:

FDR’s policies greatly lengthened the Depression and made it far worse than it otherwise had to be.

It may be taken as an axiom that government actions in the economy — regulating, taxing, creating a national debt — prevent the economy’s own organic self-correcting mechanisms from doing what they do best: keeping the economy at a prosperous equilibrium point.

Monday, June 24, 2024

Jim Crow Governments Shackle Free Enterprise: Regulating Businesses Empowers Racism

Anyone familiar with the painful struggle for civil rights in the United States has read about “Jim Crow Laws.” What were these laws? They were regulations which enforced various forms of segregation and discrimination.

But why were they “laws”? They were the actions of a powerful government: a government powerful enough to impose regulations on where people lived, where they worked, where they shopped, and where they ate. They are examples of what the abolitionists, who were agitating to end slavery as early as the 1700s, hoped to avoid when they developed the concepts of a “limited government” and a “weak government.”

During the late 1800s and early 1900s, “segregation was imposed governmentally,” in the words of historian Ben Shapiro. It was not a social or cultural desire. It had to be imposed precisely because society and culture would not voluntarily go along with it.

It was especially necessary for governments to impose segregation on businesses. In the world of buying and selling, racial prejudice makes no sense. A business is not interested in the color of a person’s skin; it is interested in a person’s money. A consumer is not interested in a manufacturer’s gene pool; she or he is interested in the quality and price of a product.

Because a “free market” economy is intrinsically anti-racist, racists needed the government to control the businesses. If the country had a weak and limited government, it would not have been able to enforce a racist agenda. Progress toward justice and toward civil rights is the search for a weak government.

Free and unregulated markets are economies in which customers and businesses are free to make choices. In situations in which the government did not force businesses to segregate by means of Jim Crow Laws, they were already desegregating even before any civil rights legislation was enacted, as historian Ben Shapiro writes:

In February 1960, four black students in Greensboro, North Carolina, sat down at the counter at Woolworth’s. This was four years before the Civil Rights Act. By July 1960, Woolworth’s lunch counter desegregated itself, after losing $200,000. The market worked.

Racists have an affinity toward strong controlling governments: with such power, the racists can force segregation on society. Anti-racists have a desire for a weak and limited government: under such governments, businesses are free to buy and sell for motives of profit instead of motives of race.

“The bottom line is that racists cannot trust free markets to racially discriminate,” writes economist Walter Williams. “Racists need the force of government to have success.”

Williams goes on to report that “from the 1880s into the 1960s” it was not business, but government, that “enforced some form of segregation through what were known as Jim Crow laws.”

If a business ever acts in a racist manner, it usually is because the government forces it to do so. Businesses don’t often want to act in a racist way, because racial calculations don’t usually maximize profits.

Those few businesses which act in racist ways usually pay the price. For example, Lester Maddox owned and operated a restaurant in Atlanta, Georgia. He would not allow any African-Americans into the restaurant as customers. When three Black people walked up to the restaurant in April 1964 and asked to be seated, he responded by brandishing an ax handle, implying his willingness to use violence. Lester Maddox’s racist ways were not profitable, and soon he and his restaurant were out of business. Meanwhile, other restaurants in Atlanta which happily served any paying customer continued to operate profitably. Although a failure in business, Lester Maddox was rewarded for his racist actions: the Democratic Party chose him as its nominee, and he became governor of Georgia.

The example of Maddox is the example of a business whose racist ways of operating do not optimize profit; the business suffers as a result. By contrast, the Montgomery Bus Boycott shows how an anti-racist company, National City Lines (NCL), was forced by the government to act in racist ways. NCL was a private company which was hired by various cities to operate bus systems in those cities. NCL had been hired by Montgomery, Alabama to run the city’s buses. But the city government imposed a restriction on NCL. It insisted that the buses be segregated.

After examining the “sit in” actions at segregated lunch counters, Ben Shapiro looks at the bus example:

Then there’s the Montgomery bus boycott. In 1955, city ordinances required segregation on buses. Rosa Parks and the NAACP organized a massive boycott that resulted in 40,000 black people refusing to take the buses the day after Parks’ famous refusal to move to the back of the bus. The only reason that the bus company refused to abide by the demands of the boycotters is that they were in negotiations with the city, and the city ordinances prevented them from doing so.

Not only did the government of Montgomery inflict segregation on the passengers of the buses, but it regulated the bus company, forcing it to segregate, and thereby forcing it to act in a way which did not optimize profits. Left to their own devices, businesses will desegregate, because segregation is not the most profitable option. Businesses will segregate only when governments force them to do so, as Shapiro explains:

The market is better at uprooting such discrimination than the government is without invading the rights of private business owners to choose their clientele.

The real estate sector provides a clear example. Readers will know that the term “redlining” refers to the practice of marking some neighborhoods as “off limits” to Black homebuyers. A related mechanism was the restrictive “covenant” written into real estate deeds. Racists were able to keep African-American home-buyers out of neighborhoods only because the government enforced these real estate covenants. If the government were a limited government, which allowed free market real estate transactions, then it would not have been powerful enough to keep Black people out of these neighborhoods.

Freed from the restraints imposed by Jim Crow Laws, real estate agents and home sellers would have sold houses to African-Americans. Those who sell real estate seek only to sell to the highest bidder; sellers have no interest in skin color or gene pools.

The most powerful tool to promote justice and to advance civil rights is an unregulated business environment. When buyers and sellers are free to simply look for the “best deal,” then racism is quickly ignored in favor of profit.

Racism without a connection to a strong government is a nasty, evil, and toothless sentiment. It is ugly, but also relatively powerless. Racism in the presence of a strong government is empowered to inflict harm, pain, and suffering. When a society ordains a limited government, instead of powerful government, racism is prevented from having concrete effects.

Sunday, June 23, 2024

Why Did It Take So Long? The Abolition of Slavery in the United States

There is no simple explanation for the history of slavery in the United States. From 1607, the time of the first permanent settlement in what would become the thirteen colonies and later the thirteen states, to 1863, the year of Lincoln’s Emancipation Proclamation, there is no easy narrative to decipher the events in North America. Rather, there is a complex series of occurrences.

And when a coherent unifying narrative is formed incorporating all of those occurrences, new data are discovered, demanding the formation of a yet more complicated narrative.

In 1652, not even half a century after Jamestown’s founding, the Rhode Island legislative body outlawed slavery in that colony. This achievement was the result of abolitionists, including Roger Williams, who had begun agitating for the abolition of slavery in Rhode Island in 1636. In the same year that this legislation was passed, 1652, Samuel Sewall was born; he later carried the abolitionist agenda forward, this time in Massachusetts, authoring anti-slavery texts.

To call the anti-slavery agitators in the 1600s ‘abolitionists’ is somewhat anachronistic, because the word at that time was not often so used. Yet in substance they were exactly that.

Here is, then, a great mystery: Given the vigorous start which the abolitionist movement had by the mid 1600s, and given that more than half the population in each of the thirteen colonies, later thirteen states, was opposed to slavery, why did slavery persist for so long?

Even in the slave states, a majority of the population was not enthusiastic about slavery. Slaveholders and their sycophants defended the institution energetically, but they were less than half of the population in the slaveholding states. While the minority in those states enjoyed an economic advantage from slavery, the free majority understood slavery as undermining economic opportunities. Free men who did not own slaves, but who lived in slaveholding states, saw their income driven downward by the institution of slavery.

Yet slavery persisted.

The slaveholders were perhaps so deftly able to defend slavery because they had disproportionately large economic resources, they mastered the skills of political and legal maneuvering, and they did not eschew the use of violence in pursuing their goals.

A majority of the “founders wanted to abolish slavery,” as Ben Shapiro notes. The slaveholders and their supporters, despite being a distinct minority, found procedural ways to coerce the remainder of the new nation into allowing slavery, as Shapiro writes:

From its founding, the United States attempted to come to grips with slavery and phase it out. The state of Vermont was the first sovereign state to abolish slavery, in 1777. During the debate over the Declaration of Independence, Thomas Jefferson wanted to include a provision that would have condemned King George III for “wag[ing] cruel war against human nature itself, violating its most sacred rights of life and liberty in the persons of a distant people who never offended him, captivating and carrying them into slavery in another hemisphere, or to incur miserable death in their transportation hither.” Southern states demanded that this provision be removed in return for joining the revolution. Having no choice, Jefferson removed the clause.

By the time the new Constitution was written in 1787, the situation was still the same. The minority percentage of pro-slavery citizens in the United States refused the majority’s wish that the new government do away with slavery entirely. The abolitionists nonetheless found a way to weaken the pro-slavery bloc: the “three-fifths” clause.

This clause has been debated and misunderstood for over two centuries. The abolitionists refused to give the slaveholding states a one-for-one representation for their slaves in Congress. Why should a slaveholding state have a greater representation in Congress than a free state, when the slaves were not allowed to vote? Should the size of a state’s representation be based on the number of people in that state, or on the number of free people? If a state with slaves were to obtain a larger representation by including the number of slaves in the calculation, then the pro-slavery bloc would have an overwhelming and undefeatable hold on Congress, and slavery could never be abolished. Only by reducing the representation of the slaveholding states could the abolitionist cause find a foothold in the legislative process.
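To make the arithmetic of the compromise concrete, here is a brief worked illustration; the population figures are hypothetical, chosen only to show the mechanism, while the formula itself is the apportionment rule of Article I, Section 2:

\[
P_{\text{apportionment}} = P_{\text{free}} + \tfrac{3}{5}\,P_{\text{enslaved}}
\]
\[
\text{e.g. } 300{,}000 + \tfrac{3}{5}(200{,}000) = 420{,}000
\]

rather than 500,000 under one-for-one counting of slaves, or 300,000 if only free persons were counted. The slaveholding states thus gained seats relative to a free-persons-only count, but fewer than they would have gained under the one-for-one counting which the abolitionists refused.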

By not reducing the formula to zero, the new Constitution also created an inherent structural instability, a conceptual disequilibrium, which would guarantee that the issue of abolitionism would never go away. It would continually resurface until the matter was resolved once and for all.

John Brown, along with many members of his family, was part of an abolitionist network which included David Hudson and a young Ulysses S. Grant.

In 1859, John Brown was instrumental in nudging the abolitionist movement away from its pacifist leanings. If the slaveholders were willing to use violence to defend slavery, John Brown and David Hudson reasoned, then the abolitionists and slaves together might use violence to end slavery, as historian Franklin Benjamin Sanborn wrote in 1878:

Old Squire Hudson, for whom the town so-called in Ohio was named, and who was the leading man in that section where Brown spent his boyhood, was not only an abolitionist fifty years ago, but that he favored forcible resistance by the slaves.

So it was, then, that over two centuries’ worth of abolitionism culminated in Abraham Lincoln’s Emancipation Proclamation, the arduous but successful work of implementing that proclamation, and the three amendments to the Constitution between 1865 and 1870, during the last two years of the Civil War and during the Reconstruction Era.