Author: Sooemrei (Emre Altun)

  • War

    War is an armed conflict between the armed forces of states, or between governmental forces and armed groups that are organized under a certain command structure and have the capacity to sustain military operations, or between such organized groups.

    It is generally characterized by widespread violence, destruction, and mortality, using regular or irregular military forces. Warfare refers to the common activities and characteristics of types of war, or of wars in general.

    Total war is warfare that is not restricted to purely legitimate military targets, and can result in massive civilian or other non-combatant suffering and casualties.

    The English word war derives from the 11th-century Old English words wyrre and werre, from Old French werre (guerre as in modern French), in turn from the Frankish *werra, ultimately deriving from the Proto-Germanic *werzō ‘mixture, confusion’. The word is related to the Old Saxon werran, Old High German werran, and the modern German verwirren, meaning ‘to confuse, to perplex, to bring into confusion’.

    Anthropologists disagree about whether warfare was common throughout human prehistory, or whether it was a more recent development, following the invention of agriculture or organised states. It is difficult to determine whether warfare occurred during the Paleolithic due to the sparseness of known remains. Some sources claim that most Middle and Upper Paleolithic societies were possibly fundamentally egalitarian and may have rarely or never engaged in organized violence between groups (i.e. war). Evidence of violent conflict appears to increase during the Mesolithic period, from around 10,000 years ago onwards.

    Raymond Case Kelly, an American cultural anthropologist and ethnologist, claimed that before 400,000 years ago, groups of people clashed like groups of chimpanzees, but that later groups came to prefer “positive and peaceful social relations between neighboring groups, such as joint hunting, trading, and courtship.” In his book Warless Societies and the Origin of War he explores the origins of modern wars and states that a high surplus encourages conflict, so “raiding often begins in the richest environments”.

    In War Before Civilization, Lawrence H. Keeley, a professor at the University of Illinois, says approximately 90–95% of known societies throughout history engaged in at least occasional warfare, and many fought constantly. Keeley describes several styles of primitive combat such as small raids, large raids, and massacres. All of these forms of warfare were used by primitive societies, a finding supported by other researchers. Keeley explains that early war raids were not well organized, as the participants did not have any formal training. Scarcity of resources meant defensive works were not a cost-effective way to protect the society against enemy raids. William Rubinstein wrote, “Pre-literate societies, even those organized in a relatively advanced way, were renowned for their studied cruelty.”

    Since the rise of the state some 5,000 years ago, military activity has continued over much of the globe. In Europe the oldest known battlefield is thought to date to 1250 BC. The Bronze Age has been described as a key period in the intensification of warfare, with the emergence of dedicated warriors and the development of metal weapons like swords. Two other commonly named periods of increase are the Axial Age and Modern Times. The invention of gunpowder, and its eventual use in warfare, together with the acceleration of technological advances have fomented major changes to war itself.

    In Coercion, Capital, and European States, AD 990–1992, Charles Tilly, professor of history, sociology, and social science at the University of Michigan and Columbia University, who has been described as “the founding father of 21st-century sociology”, argued that “War made the state, and the state made war”, saying that wars have led to the creation of states, which in turn perpetuate war. Tilly’s theory of state formation is considered dominant in the state formation literature.

    Since 1945, great power wars, interstate wars, territorial conquests and war declarations have declined in frequency. Wars have been increasingly regulated by international humanitarian law. Battle deaths and casualties have declined, in part due to advances in military medicine and despite advances in weapons. In Western Europe, since the late 18th century, more than 150 conflicts and about 600 battles have taken place, but no battle has taken place since 1945.

    However, war in some aspects has not necessarily declined. Civil wars have increased in absolute terms since 1945. A distinctive feature of war since 1945 is that combat has largely been a matter of civil wars and insurgencies, although the number of civil wars has declined since 1991.

    Asymmetric warfare refers to the methods used in conflicts between belligerents of drastically different levels of military capability or size.

    Biological warfare, or germ warfare, is the use of biological infectious agents or toxins such as bacteria, viruses, and fungi against people, plants, or animals. This can be conducted through sophisticated technologies, like cluster munitions, or with rudimentary techniques like catapulting an infected corpse behind enemy lines, and can include weaponized or non-weaponized pathogens.

    Chemical warfare involves the use of weaponized chemicals in combat. Poison gas as a chemical weapon was principally used during World War I, and resulted in over a million estimated casualties, including more than 100,000 civilians.

    Cold warfare is an intense international rivalry without direct military conflict, but with a sustained threat of it, including high levels of military preparations, expenditures, and development, and may involve active conflicts by indirect means, such as economic warfare, political warfare, covert operations, espionage, cyberwarfare, or proxy wars.

    Conventional warfare is a form of warfare between states in which nuclear, biological, chemical or radiological weapons are not used or see limited deployment.

    Cyberwarfare involves actions by a nation-state or international organization to attack and attempt to damage another nation’s information systems.

    Insurgency is a rebellion against authority, where irregular forces take up arms to change an existing political order. An insurgency can be fought via counterinsurgency, and may also be opposed by measures to protect the population, and by political and economic actions of various kinds aimed at undermining the insurgents’ claims against the incumbent regime.

    Information warfare is the application of destructive force on a large scale against information assets and systems, against the computers and networks that support the four critical infrastructures (the power grid, communications, financial, and transportation).

    Nuclear warfare is warfare in which nuclear weapons are the primary, or a major, method of achieving capitulation.

    Radiological warfare is any form of warfare involving deliberate radiation poisoning or contamination of an area with radiological sources.

    Total war is warfare by any means possible, disregarding the laws of war, placing no limits on legitimate military targets, using weapons and tactics resulting in significant civilian casualties, or demanding a war effort requiring significant sacrifices by the friendly civilian population.

    Unconventional warfare can be defined as “military and quasi-military operations other than conventional warfare” and may use covert forces or actions such as subversion, diversion, sabotage, espionage, biowarfare, sanctions, propaganda or guerrilla warfare.

    Entities contemplating going to war and entities considering whether to end a war may formulate war aims as an evaluation/propaganda tool. War aims may stand as a proxy for national-military resolve.

    Fried defines war aims as “the desired territorial, economic, military or other benefits expected following successful conclusion of a war”.

    Tangible/intangible aims:

    Tangible war aims may involve (for example) the acquisition of territory (as in the German goal of Lebensraum in the first half of the 20th century) or the recognition of economic concessions (as in the Anglo-Dutch Wars).

    Intangible war aims – like the accumulation of credibility or reputation – may have more tangible expression (“conquest restores prestige, annexation increases power”).

    Explicit/implicit aims:

    Explicit war aims may involve published policy decisions.
    Implicit war aims can take the form of minutes of discussion, memoranda and instructions.

    Positive/negative aims:

    “Positive war aims” cover tangible outcomes.
    “Negative war aims” forestall or prevent undesired outcomes.

    War aims can change in the course of conflict and may eventually morph into “peace conditions” – the minimal conditions under which a state may cease to wage a particular war.

    Estimates for total deaths due to war vary widely. In one estimate, primitive warfare from 50,000 to 3000 BCE has been thought to have claimed 400 million ± 133 million victims, based on the assumption that it accounted for 15.1% of all deaths. Ian Morris estimated that the rate could be as high as 20%. Other scholars find the prehistoric percentage much lower, around 2%, similar to the Neanderthals and the ancestors of apes and primates.

    For the period 3000 BCE until 1991, estimates range from 151 million to several billion. The lowest estimate for history, 151 million, was calculated by William Eckhardt. He explained his method as summing the recorded casualties and multiplying their average by the number of recorded battles or wars. This method excludes indirect deaths for premodern wars and all deaths for unrecorded wars. Few premodern wars were recorded beyond Eurasia, and only 18 wars were recorded worldwide for the period 3000–1500 BCE. Later research shifted from Eckhardt’s approach to general estimations of the percentage of population killed by wars. Azar Gat and Ian Morris both give the lowest estimate of 1% for history including all of the 20th century, or about 1 billion. The highest estimates of both scholars exceed the famous “hoax” figure of 3,640,000,000 people killed in wars, which circulated for decades in scholarly literature in various countries. Gat gives 5%, or about 5 billion. Morris gives 2% for the 20th century, 3% for 1400–1900 in Europe and “slightly higher” elsewhere, 5% for the ancient empires in 500 BC – AD 200, 10% for the rest of history, and 20% for prehistory. His total for history is thus about 9 billion.
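    Eckhardt's extrapolation, as described above, amounts to a one-line calculation: average the casualties of the wars that left records, then scale by the number of recorded wars. The sketch below uses placeholder figures, not his actual data:

```python
def eckhardt_estimate(recorded_casualties, total_wars):
    """Extrapolate total war deaths Eckhardt-style: take the average
    casualties of the wars with surviving records, then multiply by the
    total number of recorded wars. Indirect deaths and unrecorded wars
    are excluded, which is why the method yields a low estimate."""
    average = sum(recorded_casualties) / len(recorded_casualties)
    return average * total_wars

# Placeholder figures for illustration only: three wars with recorded
# casualty counts, extrapolated to ten recorded wars in total.
print(eckhardt_estimate([50_000, 120_000, 10_000], 10))  # 600000.0
```

    Note how everything outside the record (unrecorded wars, indirect deaths) simply vanishes from the total, which is why later scholars moved to percentage-of-population estimates instead.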

    The deadliest war in history, in terms of the cumulative number of deaths since its start, is World War II, from 1939 to 1945, with 70–85 million deaths, followed by the Mongol conquests at up to 60 million. As concerns a belligerent’s losses in proportion to its prewar population, the most destructive war in modern history may have been the Paraguayan War (see Paraguayan War casualties). In 2013 war resulted in 31,000 deaths, down from 72,000 deaths in 1990.

    War usually results in significant deterioration of infrastructure and the ecosystem, a decrease in social spending, famine, large-scale emigration from the war zone, and often the mistreatment of prisoners of war or civilians. For instance, of the nine million people who were on the territory of the Byelorussian SSR in 1941, some 1.6 million were killed by the Germans in actions away from battlefields, including about 700,000 prisoners of war, 500,000 Jews, and 320,000 people counted as partisans (the vast majority of whom were unarmed civilians). Another byproduct of some wars is the prevalence of propaganda by some or all parties in the conflict, and increased revenues by weapons manufacturers.

    Three of the ten most costly wars, in terms of loss of life, have been waged in the last century. These are the two World Wars, followed by the Second Sino-Japanese War (which is sometimes considered part of World War II, or as overlapping). Most of the others involved China or neighboring peoples. The death toll of World War II, at over 60 million, surpasses that of all other wars.

    Military personnel subject to combat in war often suffer mental and physical injuries, including depression, posttraumatic stress disorder, disease, injury, and death.

    In every war in which American soldiers have fought in, the chances of becoming a psychiatric casualty – of being debilitated for some period of time as a consequence of the stresses of military life – were greater than the chances of being killed by enemy fire.

    —No More Heroes, Richard Gabriel

    Swank and Marchand’s World War II study found that after sixty days of continuous combat, 98% of all surviving military personnel will become psychiatric casualties. Psychiatric casualties manifest themselves in fatigue cases, confusional states, conversion hysteria, anxiety, obsessional and compulsive states, and character disorders.

    One-tenth of mobilised American men were hospitalised for mental disturbances between 1942 and 1945, and after thirty-five days of uninterrupted combat, 98% of them manifested psychiatric disturbances in varying degrees.

    —14–18: Understanding the Great War, Stéphane Audoin-Rouzeau, Annette Becker

    Additionally, it has been estimated that anywhere from 18% to 54% of Vietnam War veterans suffered from posttraumatic stress disorder.

    Based on 1860 census figures, 8% of all white American males aged 13 to 43 died in the American Civil War, including about 6% in the North and approximately 18% in the South. The war remains the deadliest conflict in American history, resulting in the deaths of 620,000 military personnel. United States military casualties of war since 1775 have totaled over two million. Of the 60 million European military personnel who were mobilized in World War I, 8 million were killed, 7 million were permanently disabled, and 15 million were seriously injured.

    During Napoleon’s retreat from Moscow, more French military personnel died of typhus than were killed by the Russians. Of the 450,000 soldiers who crossed the Neman on 25 June 1812, fewer than 40,000 returned. From 1500 to 1914, more military personnel were killed by typhus than by military action. In addition, were it not for modern medical advances, there would be thousands more dead from disease and infection. For instance, during the Seven Years’ War, the Royal Navy reported it conscripted 184,899 sailors, of whom 133,708 (72%) died of disease or were ‘missing’. It is estimated that between 1985 and 1994, 378,000 people per year died due to war.

    Most wars have resulted in significant loss of life, along with destruction of infrastructure and resources (which may lead to famine, disease, and death in the civilian population). During the Thirty Years’ War in Europe, the population of the Holy Roman Empire was reduced by 15 to 40 percent. Civilians in war zones may also be subject to war atrocities such as genocide, while survivors may suffer the psychological aftereffects of witnessing the destruction of war. War also results in lower quality of life and worse health outcomes. A medium-sized conflict with about 2,500 battle deaths reduces civilian life expectancy by one year and increases infant mortality by 10% and malnutrition by 3.3%. Additionally, about 1.8% of the population loses access to drinking water.

    Most estimates of World War II casualties indicate around 60 million people died, 40 million of whom were civilians. Deaths in the Soviet Union were around 27 million. Since a high proportion of those killed were young men who had not yet fathered any children, population growth in the postwar Soviet Union was much lower than it otherwise would have been.

    Once a war has ended, losing nations are sometimes required to pay war reparations to the victorious nations. In certain cases, land is ceded to the victorious nations. For example, the territory of Alsace-Lorraine has been traded between France and Germany on three different occasions.

    Typically, war becomes intertwined with the economy, and many wars are partially or entirely based on economic reasons. The common view among economic historians is that the Great Depression ended with the advent of World War II. Many economists believe that government spending on the war caused or at least accelerated recovery from the Great Depression, while others consider that it did not play a very large role in the recovery beyond helping to reduce unemployment. In most cases, such as the wars of Louis XIV, the Franco-Prussian War, and World War I, warfare primarily results in damage to the economy of the countries involved. For example, Russia’s involvement in World War I took such a toll on the Russian economy that it almost collapsed and greatly contributed to the start of the Russian Revolution of 1917.

    World War II was the most financially costly conflict in history; its belligerents cumulatively spent about a trillion U.S. dollars on the war effort (as adjusted to 1940 prices). The Great Depression of the 1930s ended as nations increased their production of war materials.

    By the end of the war, 70% of European industrial infrastructure was destroyed. Property damage in the Soviet Union inflicted by the Axis invasion was estimated at a value of 679 billion rubles. The combined damage consisted of complete or partial destruction of 1,710 cities and towns, 70,000 villages/hamlets, 2,508 church buildings, 31,850 industrial establishments, 40,000 mi (64,374 km) of railroad, 4,100 railroad stations, 40,000 hospitals, 84,000 schools, and 43,000 public libraries.

    There are many theories about the motivations for war, but no consensus about which are most common. Military theorist Carl von Clausewitz said, “Every age has its own kind of war, its own limiting conditions, and its own peculiar preconceptions.”

    Dutch psychoanalyst Joost Meerloo held that, “War is often…a mass discharge of accumulated internal rage (where)…the inner fears of mankind are discharged in mass destruction.” Other psychoanalysts such as E.F.M. Durbin and John Bowlby have argued that human beings are inherently violent. This aggressiveness is fueled by displacement and projection, whereby a person transfers his or her grievances into bias and hatred against other races, religions, nations or ideologies. By this theory, the nation state preserves order in the local society while creating an outlet for aggression through warfare.

    The Italian psychoanalyst Franco Fornari, a follower of Melanie Klein, thought war was the paranoid or projective “elaboration” of mourning. Fornari thought war and violence develop out of our “love need”: our wish to preserve and defend the sacred object to which we are attached, namely our early mother and our fusion with her. For the adult, nations are the sacred objects that generate warfare. Fornari focused upon sacrifice as the essence of war: the astonishing willingness of human beings to die for their country, to give over their bodies to their nation.

    Despite Fornari’s theory that man’s altruistic desire for self-sacrifice for a noble cause is a contributing factor towards war, few wars have originated from a desire for war among the general populace. Far more often the general population has been reluctantly drawn into war by its rulers. One psychological theory that looks at the leaders is advanced by Maurice Walsh. He argues the general populace is more neutral towards war, and that wars occur when leaders with a psychologically abnormal disregard for human life are placed into power. In this view, war is caused by leaders who seek it, such as Napoleon and Hitler. Such leaders most often come to power in times of crisis, when the populace opts for a decisive leader, who then leads the nation to war.

    Naturally, the common people don’t want war; neither in Russia nor in England nor in America, nor for that matter in Germany. That is understood. But, after all, it is the leaders of the country who determine the policy and it is always a simple matter to drag the people along, whether it is a democracy or a fascist dictatorship or a Parliament or a Communist dictatorship. … the people can always be brought to the bidding of the leaders. That is easy. All you have to do is tell them they are being attacked and denounce the pacifists for lack of patriotism and exposing the country to danger. It works the same way in any country.

    — Hermann Göring at the Nuremberg trials, 18 April 1946

    Several theories concern the evolutionary origins of warfare. There are two main schools: One sees organized warfare as emerging in or after the Mesolithic as a result of complex social organization and greater population density and competition over resources; the other sees human warfare as a more ancient practice derived from common animal tendencies, such as territoriality and sexual competition.

    The latter school argues that since warlike behavior patterns are found in many primate species such as chimpanzees, as well as in many ant species, group conflict may be a general feature of animal social behavior. Some proponents of the idea argue that war, while innate, has been intensified greatly by developments of technology and social organization such as weaponry and states.

    Psychologist and linguist Steven Pinker argued that war-related behaviors may have been naturally selected in the ancestral environment due to the benefits of victory. He also argued that in order to have credible deterrence against other groups (as well as on an individual level), it was important to have a reputation for retaliation, causing humans to develop instincts for revenge as well as for protecting a group’s (or an individual’s) reputation (“honor”).

    Crofoot and Wrangham have argued that warfare, if defined as group interactions in which “coalitions attempt to aggressively dominate or kill members of other groups”, is a characteristic of most human societies. Those in which it has been lacking “tend to be societies that were politically dominated by their neighbors”.

    Ashley Montagu strongly denied universalistic instinctual arguments, arguing that social factors and childhood socialization are important in determining the nature and presence of warfare. Thus, he argues, warfare is not a universal human occurrence and appears to have been a historical invention, associated with certain types of human societies. Montagu’s argument is supported by ethnographic research conducted in societies where the concept of aggression seems to be entirely absent, e.g. the Chewong and Semai of the Malay peninsula. Bobbi S. Low has observed a correlation between warfare and education, noting that societies where warfare is commonplace encourage their children to be more aggressive.

    War can be seen as a growth of economic competition in a competitive international system. In this view wars begin as a pursuit of markets for natural resources and for wealth. War has also been linked to economic development by economic historians and development economists studying state-building and fiscal capacity. While this theory has been applied to many conflicts, such arguments become less valid as the increasing mobility of capital and information levels the distribution of wealth worldwide, or when considering that it is relative, not absolute, wealth differences that may fuel wars. Some on the extreme right of the political spectrum, fascists in particular, support this view by asserting a natural right of a strong nation to whatever the weak cannot hold by force. Some centrist and capitalist world leaders, including Presidents of the United States and U.S. generals, have expressed support for an economic view of war.

    The Marxist theory of war is quasi-economic in that it states all modern wars are caused by competition for resources and markets between great (imperialist) powers, claiming these wars are a natural result of capitalism. Marxist economists Karl Kautsky, Rosa Luxemburg, Rudolf Hilferding and Vladimir Lenin theorized that imperialism was the result of capitalist countries needing new markets. Expansion of the means of production is only possible if there is a corresponding growth in consumer demand. Since the workers in a capitalist economy would be unable to fill the demand, producers must expand into non-capitalist markets to find consumers for their goods, hence driving imperialism.

    Demographic theories can be grouped into two classes, Malthusian and youth bulge theories:

    Malthusian theories see expanding population and scarce resources as a source of violent conflict. Pope Urban II in 1095, on the eve of the First Crusade, advocating Crusade as a solution to European overpopulation, said:

    For this land which you now inhabit, shut in on all sides by the sea and the mountain peaks, is too narrow for your large population; it scarcely furnishes food enough for its cultivators. Hence it is that you murder and devour one another, that you wage wars, and that many among you perish in civil strife. Let hatred, therefore, depart from among you; let your quarrels end. Enter upon the road to the Holy Sepulchre; wrest that land from a wicked race, and subject it to yourselves.

    This is one of the earliest expressions of what has come to be called the Malthusian theory of war, in which wars are caused by expanding populations and limited resources. Thomas Malthus (1766–1834) wrote that populations always increase until they are limited by war, disease, or famine. The violent herder–farmer conflicts in Nigeria, Mali, Sudan and other countries in the Sahel region have been exacerbated by land degradation and population growth.

    According to Heinsohn, who proposed youth bulge theory in its most generalized form, a youth bulge occurs when 30 to 40 percent of the males of a nation belong to the “fighting age” cohorts from 15 to 29 years of age. Such bulges follow periods with total fertility rates as high as 4–8 children per woman, with a 15–29-year delay. Heinsohn saw both past “Christianist” European colonialism and imperialism, as well as today’s Islamist civil unrest and terrorism, as results of high birth rates producing youth bulges.
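    Heinsohn's threshold, as stated above, is simple enough to express as a check over a male age distribution. The 30% cutoff is his lower bound; the population counts below are made-up illustrative data:

```python
def has_youth_bulge(males_by_age, threshold=0.30):
    """Return True when the 15-29 "fighting age" cohorts make up at
    least `threshold` of a nation's males (Heinsohn's lower bound)."""
    total = sum(males_by_age.values())
    fighting_age = sum(n for age, n in males_by_age.items() if 15 <= age <= 29)
    return fighting_age / total >= threshold

# Hypothetical male population counts, keyed by single year of age.
young_nation = {5: 90, 15: 80, 20: 75, 25: 70, 40: 60, 60: 40}
print(has_youth_bulge(young_nation))  # True: fighting-age share is 225/415
```

    An aging population, with most males above 29, would fall below the threshold and return False under the same check.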

    Among prominent historical events that have been attributed to youth bulges are the role played by the historically large youth cohorts in the rebellion and revolution waves of early modern Europe, including the French Revolution of 1789, and the effect of economic depression upon the largest German youth cohorts ever in explaining the rise of Nazism in Germany in the 1930s. The 1994 Rwandan genocide has also been analyzed as following a massive youth bulge. Youth bulge theory has been subjected to statistical analysis by the World Bank, Population Action International, and the Berlin Institute for Population and Development. Youth bulge theories have been criticized as leading to racial, gender and age discrimination.

    Geoffrey Parker argues that the “Western way of war”, rooted in Western Europe, has distinctive features that chiefly allow historians to explain its extraordinary success in conquering most of the world after 1500:

    The Western way of war rests upon five principal foundations: technology, discipline, a highly aggressive military tradition, a remarkable capacity to innovate and to respond rapidly to the innovation of others and – from about 1500 onward – a unique system of war finance. The combination of all five provided a formula for military success….The outcome of wars has been determined less by technology than by better war plans, the achievement of surprise, greater economic strength, and above all superior discipline.

    Parker argues that Western armies were stronger because they emphasized discipline, that is, “the ability of a formation to stand fast in the face of the enemy, whether attacking or being attacked, without giving way to the natural impulse of fear and panic.” Discipline came from drills and marching in formation, target practice, and creating small “artificial kinship groups” such as the company and the platoon, to enhance psychological cohesion and combat efficiency.

    Rationalism is an international relations theory or framework. Rationalism and neorealism operate under the assumption that states or international actors are rational, seek the best possible outcomes for themselves, and desire to avoid the costs of war. Under one game-theory approach, rationalist theories posit that all actors can bargain and would be better off if war did not occur, and they seek to understand why war nonetheless recurs. Under another rationalist game theory without bargaining, the peace war game, optimal strategies can still be found that depend upon the number of iterations played. In “Rationalist Explanations for War”, James Fearon examined three rationalist explanations for why some countries engage in war:

    Issue indivisibilities

    Incentives to misrepresent or information asymmetry

    Commitment problems

    “Issue indivisibility” occurs when the two parties cannot avoid war by bargaining, because the thing over which they are fighting cannot be shared between them, but only owned entirely by one side or the other. “Information asymmetry with incentives to misrepresent” occurs when two countries have secrets about their individual capabilities, and do not agree on either who would win a war between them or the magnitude of a state’s victory or loss. For instance, Geoffrey Blainey argues that war is a result of miscalculation of strength. He cites historical examples of war and demonstrates, “war is usually the outcome of a diplomatic crisis which cannot be solved because both sides have conflicting estimates of their bargaining power.” Thirdly, bargaining may fail due to the states’ inability to make credible commitments.
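    The information-asymmetry mechanism above can be illustrated with a toy Fearon-style bargaining check; the win probabilities and war costs below are made-up parameters, not empirical values:

```python
def bargaining_fails(p_a, p_b, cost_a, cost_b):
    """Toy Fearon-style check over a prize of size 1. State A believes it
    wins a war with probability p_a, so it rejects any peaceful share below
    p_a - cost_a; state B believes A wins with probability p_b, so it
    concedes at most p_b + cost_b. When A's minimum demand exceeds B's
    maximum concession, no split satisfies both sides and bargaining fails."""
    return (p_a - cost_a) > (p_b + cost_b)

# Shared, accurate beliefs: the costs of war open a bargaining range.
print(bargaining_fails(0.6, 0.6, 0.1, 0.1))  # False: a peaceful split exists
# Mutual optimism from private information closes that range.
print(bargaining_fails(0.9, 0.3, 0.1, 0.1))  # True: bargaining breaks down
```

    This mirrors Blainey's point: war follows when the two sides hold conflicting estimates of their bargaining power, not merely when their interests clash.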

    Within the rationalist tradition, some theorists have suggested that individuals engaged in war suffer a normal level of cognitive bias, but are still “as rational as you and me”. According to philosopher Iain King, “Most instigators of conflict overrate their chances of success, while most participants underrate their chances of injury….” King asserts that “Most catastrophic military decisions are rooted in groupthink”, which is faulty but still rational. The rationalist theory focused around bargaining is currently under debate; the Iraq War proved to be an anomaly that undercuts the validity of applying rationalist theory to some wars.

    The statistical analysis of war was pioneered by Lewis Fry Richardson following World War I. More recent databases of wars and armed conflict have been assembled by the Correlates of War Project, Peter Brecke and the Uppsala Conflict Data Program. The following subsections consider causes of war from system, societal, and individual levels of analysis. This kind of division was first proposed by Kenneth Waltz in Man, the State, and War and has been often used by political scientists since then.

    There are several different international relations theory schools. Supporters of realism in international relations argue that the motivation of states is the quest for security, and conflicts can arise from the inability to distinguish defense from offense, which is called the security dilemma.

    Within the realist school as represented by scholars such as Henry Kissinger and Hans Morgenthau, and the neorealist school represented by scholars such as Kenneth Waltz and John Mearsheimer, two main sub-theories are:

    Balance of power theory: States have the goal of preventing a single state from becoming a hegemon, and war is the result of the would-be hegemon’s persistent attempts at power acquisition. In this view, an international system with more equal distribution of power is more stable, and “movements toward unipolarity are destabilizing.” However, evidence has shown power polarity is not actually a major factor in the occurrence of wars.

    Power transition theory: Hegemons impose stabilizing conditions on the world order, but they eventually decline, and war occurs when a declining hegemon is challenged by another rising power or aims to pre-emptively suppress them. On this view, unlike for balance-of-power theory, wars become more probable when power is more equally distributed. This “power preponderance” hypothesis has empirical support.

    The two theories are not mutually exclusive and may be used to explain disparate events according to the circumstance. Liberalism as it relates to international relations emphasizes factors such as trade and its role in disincentivizing conflict that would damage economic relations. Critics respond that military force may sometimes be at least as effective as trade at achieving economic benefits, especially historically if not as much today. Furthermore, trade relations that result in a high level of dependency may escalate tensions and lead to conflict. Empirical data on the relationship of trade to peace are mixed, and moreover, some evidence suggests countries at war do not necessarily trade less with each other.

    Diversionary theory, also known as the “scapegoat hypothesis”, suggests the politically powerful may use war as a diversion or to rally domestic popular support. This is supported by literature showing out-group hostility enhances in-group bonding, and a significant domestic “rally effect” has been demonstrated when conflicts begin. However, studies examining the increased use of force as a function of need for internal political support are more mixed. Wartime popularity surveys taken during the presidencies of several recent U.S. leaders have supported diversionary theory.

    These theories suggest differences in people’s personalities, decision-making, emotions, belief systems, and biases are important in determining whether conflicts get out of hand. For instance, it has been proposed that conflict is modulated by bounded rationality and various cognitive biases, such as prospect theory.

    The morality of war has been the subject of debate for thousands of years.

    The two principal aspects of ethics in war, according to the just war theory, are jus ad bellum and jus in bello.

    Jus ad bellum (right to war) dictates which unfriendly acts and circumstances justify a proper authority in declaring war on another nation. There are six main criteria for the declaration of a just war: first, any just war must be declared by a lawful authority; second, it must be fought for a just and righteous cause, with sufficient gravity to merit large-scale violence; third, the just belligerent must have rightful intentions – namely, that they seek to advance good and curtail evil; fourth, a just belligerent must have a reasonable chance of success; fifth, the war must be a last resort; and sixth, the ends being sought must be proportional to the means being used.

    Jus in bello (right in war) is the set of ethical rules governing the conduct of war. The two main principles are proportionality and discrimination. Proportionality regards how much force is necessary and morally appropriate to the ends being sought and the injustice suffered. The principle of discrimination determines who are the legitimate targets in a war, and specifically makes a separation between combatants, whom it is permissible to kill, and non-combatants, whom it is not. Failure to follow these rules can result in the loss of legitimacy for the just-war belligerent.

    The just war theory was foundational in the creation of the United Nations and in international law’s regulations on legitimate war.

    Lewis Coser, an American conflict theorist and sociologist, argued that conflict provides a function and a process whereby a succession of new equilibria is created. Thus, the struggle of opposing forces, rather than being disruptive, may be a means of balancing and maintaining a social structure or society.

    Religious groups have long formally opposed or sought to limit war, as in the Second Vatican Council document Gaudium et Spes: “Any act of war aimed indiscriminately at the destruction of entire cities or extensive areas along with their population is a crime against God and man himself. It merits unequivocal and unhesitating condemnation.”

    Anti-war movements have existed for every major war in the 20th century, including, most prominently, World War I, World War II, and the Vietnam War. In the 21st century, worldwide anti-war movements occurred in response to the United States invasion of Afghanistan and Iraq. Protests opposing the War in Afghanistan occurred in Europe, Asia, and the United States.

    During a war, the parties may agree to pauses. A ceasefire is a stoppage of a war in which each side agrees with the other to suspend aggressive actions often due to mediation by a third party. Ceasefires may be declared as part of a formal treaty but also as part of an informal understanding between opposing forces. A ceasefire can be temporary with an intended end date or may be intended to last indefinitely. A ceasefire is distinct from an armistice in that the armistice is a formal end to a war whereas a ceasefire may be a temporary stoppage.

    The immediate goal of a ceasefire is to stop violence but the underlying purposes of ceasefires vary. Ceasefires may be intended to meet short-term limited needs (such as providing humanitarian aid), manage a conflict to make it less devastating, or advance efforts to peacefully resolve a dispute. An actor may not always intend for a ceasefire to advance the peaceful resolution of a conflict but instead give the actor an upper hand in the conflict (for example, by re-arming and repositioning forces or attacking an unsuspecting adversary), which creates bargaining problems that may make ceasefires less likely to be implemented and less likely to be durable if implemented.

    The durability of ceasefire agreements is affected by several factors, such as demilitarized zones, withdrawal of troops and third-party guarantees and monitoring (e.g. peacekeeping). Ceasefire agreements are more likely to be durable when they reduce incentives to attack, reduce uncertainty about the adversary’s intentions, and when mechanisms are put in place to prevent accidents from spiraling into conflict.

  • International organization

    An international organization, also known as an intergovernmental organization or an international institution, is an organization that is established by a treaty or other type of instrument governed by international law and possesses its own legal personality, such as the United Nations, the Council of Europe, African Union, Mercosur and BRICS. International organizations are composed primarily of member states, but may also include other entities, such as other international organizations, firms, and nongovernmental organizations. Additionally, entities (including states) may hold observer status.

    Examples of international organizations include: UN General Assembly, World Trade Organization, African Development Bank, UN Economic and Social Council, UN Security Council, Asian Development Bank, International Bank for Reconstruction and Development, International Monetary Fund, International Finance Corporation, Inter-American Development Bank, United Nations Environment Programme.

    Scottish law professor James Lorimer has been credited with coining the term “international organization” in an 1871 article in the Revue de Droit International et de Legislation Compare. Lorimer used the term frequently in his two-volume Institutes of the Law of Nations (1883, 1884). Other early uses of the term were by law professor Walther Schucking in works published in 1907, 1908 and 1909, and by political science professor Paul S. Reinsch in 1911. In 1935, Pitman B. Potter defined international organization as “an association or union of nations established or recognized by them for the purpose of realizing a common end”. He distinguished between bilateral and multilateral organizations on the one hand and customary or conventional organizations on the other. In his 1922 book An Introduction to the Study of International Organization, Potter argued that international organization was distinct from “international intercourse” (all relations between states), “international law” (which lacks enforcement) and world government.

    International organizations are sometimes referred to as intergovernmental organizations (IGOs), to clarify the distinction from international non-governmental organizations (INGOs), which are non-governmental organizations (NGOs) that operate internationally. These include international nonprofit organizations such as the World Organization of the Scout Movement, International Committee of the Red Cross and Médecins Sans Frontières, as well as lobby groups that represent the interests of multinational corporations.

    IGOs are established by a treaty that acts as a charter creating the group. Treaties are formed when lawful representatives (governments) of several states go through a ratification process, providing the IGO with an international legal personality. Intergovernmental organizations are an important aspect of public international law.

    Intergovernmental organizations in a legal sense should be distinguished from simple groupings or coalitions of states, such as the G7 or the Quartet. Such groups or associations have not been founded by a constituent document and exist only as task groups. Intergovernmental organizations must also be distinguished from treaties. Many treaties (such as the North American Free Trade Agreement, or the General Agreement on Tariffs and Trade before the establishment of the World Trade Organization) do not establish an independent secretariat and instead rely on the parties for their administration, for example by setting up a joint committee. Other treaties have established an administrative apparatus which was not deemed to have been granted binding legal authority. The broader concept wherein relations among three or more states are organized according to certain principles they hold in common is multilateralism.

    Intergovernmental organizations differ in function, membership, and membership criteria. They have various goals and scopes, often outlined in the treaty or charter. Some IGOs developed to fulfill a need for a neutral forum for debate or negotiation to resolve disputes. Others developed to pursue mutual interests with unified aims: to preserve peace through conflict resolution and better international relations, to promote international cooperation on matters such as environmental protection, to promote human rights, to promote social development (education, health care), to render humanitarian aid, and to promote economic development. Some are more general in scope (the United Nations) while others may have subject-specific missions (such as INTERPOL or the International Telecommunication Union and other standards organizations). Common types include:

    Worldwide or global organizations – generally open to nations worldwide as long as certain criteria are met: This category includes the United Nations (UN) and its specialized agencies, the World Health Organization, the International Telecommunication Union (ITU), the World Bank, and the International Monetary Fund (IMF). It also includes globally operating intergovernmental organizations that are not an agency of the UN, including for example: the Hague Conference on Private International Law, an operating intergovernmental organization based in The Hague that pursues the progressive unification of private international law; the International Criminal Court that adjudicates crimes defined under the Rome Statute; and the CGIAR (formerly the Consultative Group for International Agricultural Research), a global partnership that unites intergovernmental organizations engaged in research for a food-secure future.

    Cultural, linguistic, ethnic, religious, or historical organizations – open to members based on some cultural, linguistic, ethnic, religious, or historical link. Examples include the Commonwealth of Nations, Arab League, Organisation internationale de la Francophonie, Community of Portuguese Language Countries, Organization of Turkic States, International Organization of Turkic Culture, Organisation of Islamic Cooperation, and Commonwealth of Independent States (CIS).

    Economic organizations – based on macro-economic policy goals: Some are dedicated to free trade and reduction of trade barriers, e.g. World Trade Organization, International Monetary Fund. Others are focused on international development. International cartels, such as OPEC, also exist. The Organisation for Economic Co-operation and Development (OECD) was founded as an economic-policy-focused organization. An example of a recently formed economic IGO is the Bank of the South.

    Educational organizations – centered around tertiary-level study. EUCLID University was chartered as a university and umbrella organization dedicated to sustainable development in signatory countries. The United Nations has founded multiple universities, notably the United Nations University and the University for Peace, for research and education around issues relevant to the UN, such as peace and sustainable development. The United Nations also has a dedicated training arm: the United Nations Institute for Training and Research (UNITAR).
    Health and population organizations – based on commonly perceived health and population goals, formed to address those challenges collectively; for example, Partners in Population and Development, an intergovernmental partnership for population and development.
    Regional organizations – open to members from a particular continent or other specific region of the world. This category includes the Community of Latin American and Caribbean States (CLACS), Council of Europe (CoE), European Union (EU), Eurasian Economic Union (EAEU), Energy Community, North Atlantic Treaty Organization (NATO), Economic Community of West African States (ECOWAS), Organization for Security and Co-operation in Europe (OSCE), African Union (AU), Organization of American States (OAS), Association of Caribbean States (ACS), Association of Southeast Asian Nations (ASEAN), Islamic Development Bank, Union of South American Nations, Asia Cooperation Dialogue (ACD), Pacific Islands Forum, South Asian Association for Regional Cooperation (SAARC), Asian-African Legal Consultative Organization (AALCO) and the Organisation of Eastern Caribbean States (OECS).

    In regional organizations like the European Union, African Union, NATO, ASEAN and Mercosur, there are restrictions on membership due to factors such as geography or political regime. To join the European Union (EU), a state must meet several criteria: it must be European, have a liberal-democratic political system, and have a capitalist economy.

    The oldest regional organization is the Central Commission for Navigation on the Rhine, created in 1815 by the Congress of Vienna.

    There are several different reasons a state may choose membership in an intergovernmental organization. But there are also reasons membership may be rejected.

    Reasons for participation:

    Economic rewards: In the case of the North American Free Trade Agreement (NAFTA), membership in the free trade agreement benefits the parties’ economies; for example, Mexican companies are given better access to U.S. markets through membership. External actors can also contribute economic rewards and increase the attractiveness of IGOs, notably for developing countries – for example, external donor funding from the European Union to IGOs in the Global South.
    Political influence: Smaller countries, such as Portugal and Belgium, which do not carry much political clout on the international stage, gain a substantial increase in influence through membership in IGOs such as the European Union. IGOs are also beneficial for more influential countries such as France and Germany: membership increases their influence over smaller countries’ internal affairs and expands other nations’ dependence on them, preserving allegiance.

    Security: Membership in an IGO such as NATO gives security benefits to member countries. This provides an arena where political differences can be resolved.
    Democracy: It has been noted that member countries experience a greater degree of democracy and that those democracies survive longer.

    Reasons for rejecting membership:

    Loss of sovereignty: Membership often comes with a loss of state sovereignty as treaties are signed that require co-operation on the part of all member states.
    Insufficient benefits: Often membership does not bring about substantial enough benefit to warrant membership in the organization.

    Attractive external options: Bilateral co-operation with external actors or competing IGOs may provide more attractive (external) policy options for member states. Thus, powerful external actors may undermine existing IGOs.

    Intergovernmental organizations are provided with privileges and immunities that are intended to ensure their independent and effective functioning. They are specified in the treaties that give rise to the organization (such as the Convention on the Privileges and Immunities of the United Nations and the Agreement on the Privileges and Immunities of the International Criminal Court), which are normally supplemented by further multinational agreements and national regulations (for example the International Organizations Immunities Act in the United States). The organizations are thereby immune from the jurisdiction of national courts. Certain privileges and immunities are also specified in the Vienna Convention on the Representation of States in their Relations with International Organizations of a Universal Character of 1975, which, however, has so far not been ratified by the required 35 states and is thus not yet in force (status: 2022).

    Rather than by national jurisdiction, legal accountability is intended to be ensured by legal mechanisms that are internal to the intergovernmental organization itself and access to administrative tribunals. In the course of many court cases where private parties tried to pursue claims against international organizations, there has been a gradual realization that alternative means of dispute settlement are required as states have fundamental human rights obligations to provide plaintiffs with access to court in view of their right to a fair trial. Otherwise, the organizations’ immunities may be put in question in national and international courts. Some organizations hold proceedings before tribunals relating to their organization to be confidential, and in some instances have threatened disciplinary action should an employee disclose any of the relevant information. Such confidentiality has been criticized as a lack of transparency.

    The immunities also extend to employment law. In this regard, immunity from national jurisdiction necessitates that reasonable alternative means are available to effectively protect employees’ rights; in this context, a first instance Dutch court considered an estimated duration of proceedings before the Administrative Tribunal of the International Labour Organization of 15 years to be too long. An international organization does not pay taxes, is difficult to prosecute in court and is not obliged to provide information to any parliament.

    The United Nations focuses on five main areas: “maintaining peace and security, protecting human rights, delivering humanitarian aid, supporting sustainable development, and upholding international law”. UN agencies, such as the UN Relief and Works Agency, are generally regarded as international organizations in their own right. Additionally, the United Nations has Specialized Agencies, which are organizations within the United Nations System that have their own member states (often nearly identical to the UN member states) and are governed independently by them; examples include international organizations that predate the UN, such as the International Telecommunication Union and the Universal Postal Union, as well as organizations that were created after the UN, such as the World Health Organization (which was made up of regional organizations such as PAHO that predated the UN). A few UN specialized agencies are very centralized in policy and decision-making, while others are decentralized; for example, country-based project or mission directors and managers can decide what they want to do in the field.

    The UN agencies have a variety of tasks based on their specializations and interests. They provide different kinds of assistance to low-income and middle-income countries, and this assistance is a valuable resource for developmental projects in developing countries. The UN also works to protect against human rights violations; within the UN system, specialized agencies such as the ILO and the United Nations High Commissioner for Refugees (UNHCR) work in the field of human rights protection. The ILO seeks to end discrimination in the workplace and child labor, and beyond that promotes fundamental labor rights and safe, secure conditions for workers. The United Nations Environment Programme (UNEP) is the UN agency that coordinates UN activities on the environment.

    An early prominent example of an international organization is the Congress of Vienna of 1814–1815, an international diplomatic conference to reconstitute the European political order after the downfall of the French Emperor Napoleon. States then became the main decision-makers, preferring to maintain the sovereignty they had held since the 1648 Peace of Westphalia, which ended the Thirty Years’ War in Europe.

    The first and oldest international organization – established by a treaty, with a permanent secretariat and a global membership – was the International Telecommunication Union (founded in 1865). The first general international organization – addressing a variety of issues – was the League of Nations, founded on 10 January 1920 with a principal mission of maintaining world peace after World War I. The United Nations followed this model after World War II; its charter was signed on 26 June 1945, in San Francisco, at the conclusion of the United Nations Conference on International Organization, and came into force on 24 October 1945. Currently, the UN is the main IGO, with organs such as the United Nations Security Council (UNSC), the General Assembly (UNGA), the International Court of Justice (ICJ), the Secretariat, the Trusteeship Council (UNTC) and the Economic and Social Council (ECOSOC).

    When defined as “organizations with at least three state parties, a permanent headquarters or secretariat, as well as regular meetings and budgets”, the number of IGOs in the world increased from about 60 in 1940 to about 350 in 1980, after which it has remained roughly constant.

  • Shakespeare Meaning

    Poet the equal of leading Arab in recital

  • State (polity)

    A state is a political entity that regulates society and the population within a definite territory. Government is considered to form the fundamental apparatus of contemporary states.

    A country often has a single state, with various administrative divisions. A state may be a unitary state or some type of federal union; in the latter type, the term “state” is sometimes used to refer to the federated polities that make up the federation, and they may have some of the attributes of a sovereign state, except being under their federation and without the same capacity to act internationally. (Other terms that are used in such federal systems may include “province”, “region” or other terms.)

    For most of prehistory, people lived in stateless societies. The earliest forms of states arose about 5,500 years ago. Over time societies became more stratified and developed institutions leading to centralised governments. These gained state capacity in conjunction with the growth of cities, which was often dependent on climate and economic development, with centralisation often spurred on by insecurity and territorial competition.

    Over time, varied forms of states developed, that used many different justifications for their existence (such as divine right, the theory of the social contract, etc.). Today, the modern nation state is the predominant form of state to which people are subject. Sovereign states have sovereignty; any ingroup’s claim to have a state faces some practical limits via the degree to which other states recognize them as such. Satellite states are states that have de facto sovereignty but are often indirectly controlled by another state.

    Definitions of a state are disputed. According to sociologist Max Weber, a “state” is a polity that maintains a monopoly on the legitimate use of violence, although other definitions are common. Absence of a state does not preclude the existence of a society, such as stateless societies like the Haudenosaunee Confederacy that “do not have either purely or even primarily political institutions or roles”. The degree and extent of governance of a state is used to determine whether it has failed.

    The word state and its cognates in some other European languages (stato in Italian, estado in Spanish and Portuguese, état in French, Staat in German and Dutch) ultimately derive from the Latin word status, meaning “condition, circumstances”. Latin status derives from stare, “to stand”, or remain or be permanent, thus providing the sacred or magical connotation of the political entity.

    The English noun state in the generic sense “condition, circumstances” predates the political sense. It was introduced to Middle English c. 1200 both from Old French and directly from Latin.

    With the revival of the Roman law in 14th-century Europe, the term came to refer to the legal standing of persons (such as the various “estates of the realm” – noble, common, and clerical), and in particular the special status of the king. The highest estates, generally those with the most wealth and social rank, were those that held power. The word also had associations with Roman ideas (dating back to Cicero) about the “status rei publicae”, the “condition of public matters”. In time, the word lost its reference to particular social groups and became associated with the legal order of the entire society and the apparatus of its enforcement.

    The early 16th-century works of Machiavelli (especially The Prince) played a central role in popularizing the use of the word “state” in something similar to its modern sense. The contrasting of church and state also dates to the 16th century. The North American colonies were called “states” as early as the 1630s. The expression “L’État, c’est moi” (“I am the State”), attributed to Louis XIV, although probably apocryphal, is recorded in the late 18th century.

    There is no academic consensus on the definition of the state. The term “state” refers to a set of different, but interrelated and often overlapping, theories about a certain range of political phenomena. According to Walter Scheidel, mainstream definitions of the state have the following in common: “centralized institutions that impose rules, and back them up by force, over a territorially circumscribed population; a distinction between the rulers and the ruled; and an element of autonomy, stability, and differentiation. These distinguish the state from less stable forms of organization, such as the exercise of chiefly power.”

    The most commonly used definition is by Max Weber who describes the state as a compulsory political organization with a centralized government that maintains a monopoly of the legitimate use of force within a certain territory. Weber writes that the state “is a human community that (successfully) claims the monopoly of the legitimate use of physical force within a given territory.”

    While defining a state, it is important not to confuse it with a nation, an error that occurs frequently in common discussion. A state refers to a political unit with sovereignty over a given territory. While a state is more of a “political-legal abstraction,” the definition of a nation is more concerned with political identity and cultural or historical factors. Importantly, nations do not possess the organizational characteristics, such as geographic boundaries or authority figures and officials, that states do. Additionally, a nation does not have a claim to a monopoly on the legitimate use of force over its populace, while a state does, as Weber indicated. An example of the instability that arises when a state lacks a monopoly on the use of force can be seen in African states, which remain weak in part because they lack the history of interstate war on which European state-building relied. A state should not be confused with a government; a government is an organization that has been granted the authority to act on behalf of a state. Nor should a state be confused with a society; a society refers to all organized groups, movements, and individuals who are independent of the state and seek to remain out of its influence.

    Neuberger offers a slightly different definition of the state with respect to the nation: the state is “a primordial, essential, and permanent expression of the genius of a specific (nation).”

    The definition of a state is also dependent on how and why states form. The contractarian view of the state suggests that states form because people can all benefit from cooperation with others and that without a state there would be chaos. The contractarian view focuses more on the alignment and conflict of interests between individuals in a state. On the other hand, the predatory view of the state focuses on the potential mismatch between the interests of the people and the interests of the state. Charles Tilly goes so far as to say that states “resemble a form of organized crime and should be viewed as extortion rackets.” He argued that the state sells protection from itself, raising the question of why people should trust a state when they cannot trust one another.

    Tilly defines states as “coercion-wielding organisations that are distinct from households and kinship groups and exercise a clear priority in some respects over all other organizations within substantial territories.” Tilly includes city-states, theocracies and empires in his definition along with nation-states, but excludes tribes, lineages, firms and churches. According to Tilly, states can be seen in the archaeological record as of 6000 BC; in Europe they appeared around 990, but became particularly prominent after 1490. Tilly defines a state’s “essential minimal activities” as:

    1. War making – “eliminating or neutralizing their outside rivals”
    2. State making – “eliminating or neutralizing their rivals inside their own territory”
    3. Protection – “eliminating or neutralizing the enemies of their clients”
    4. Extraction – “acquiring the means of carrying out the first three activities”
    5. Adjudication – “authoritative settlement of disputes among members of the population”
    6. Distribution – “intervention in the allocation of goods among the members of the population”
    7. Production – “control of the creation and transformation of goods and services produced by the population”

    Importantly, Tilly makes the case that war is an essential part of state-making; that wars create states and vice versa.

    Modern academic definitions of the state frequently include the criterion that a state has to be recognized as such by the international community.

    Liberal thought provides another possible teleology of the state. According to John Locke, the goal of the state or commonwealth is “the preservation of property” (Second Treatise on Government), with ‘property’ in Locke’s work referring not only to personal possessions but also to one’s life and liberty. On this account, the state provides the basis for social cohesion and productivity, creating incentives for wealth-creation by providing guarantees of protection for one’s life, liberty and personal property. Provision of public goods is considered by some, such as Adam Smith, as a central function of the state, since these goods would otherwise be underprovided. Tilly has challenged narratives of the state as being the result of a societal contract or provision of services in a free market – he characterizes the state as more akin to a protection racket in the vein of organized crime.

    While economic and political philosophers have contested the monopolistic tendency of states, Robert Nozick argues that the use of force naturally tends towards monopoly.

    Another commonly accepted definition of the state is the one given at the Montevideo Convention on the Rights and Duties of States in 1933. It provides that “[t]he state as a person of international law should possess the following qualifications: (a) a permanent population; (b) a defined territory; (c) government; and (d) capacity to enter into relations with the other states.” It also provides that “[t]he federal state shall constitute a sole person in the eyes of international law.”

    Confounding the definition problem is that “state” and “government” are often used as synonyms in common conversation and even some academic discourse. According to this definitional schema, states are nonphysical persons of international law, while governments are organizations of people. The relationship between a government and its state is one of representation and authorized agency.

    Charles Tilly distinguished between empires, theocracies, city-states and nation-states. According to Michael Mann, the four persistent types of state activities are:

    1. Maintenance of internal order
    2. Military defence and aggression
    3. Maintenance of communications infrastructure
    4. Economic redistribution

    Josep Colomer distinguished between empires and states in the following way:

    1. Empires were vastly larger than states
    2. Empires lacked fixed or permanent boundaries whereas a state had fixed boundaries
    3. Empires had a “compound of diverse groups and territorial units with asymmetric links with the center” whereas a state had “supreme authority over a territory and population”
    4. Empires had multi-level, overlapping jurisdictions whereas a state sought monopoly and homogenization

    According to Michael Hechter and William Brustein, the modern state was differentiated from “leagues of independent cities, empires, federations held together by loose central control, and theocratic federations” by four characteristics:

    1. The modern state sought and achieved territorial expansion and consolidation
    2. The modern state achieved unprecedented control over social, economic, and cultural activities within its boundaries
    3. The modern state established ruling institutions that were separate from other institutions
    4. The ruler of the modern state was far better at monopolizing the means of violence

    States may be classified by political philosophers as sovereign if they are not dependent on, or subject to any other power or state. Other states are subject to external sovereignty or hegemony where ultimate sovereignty lies in another state. Many states are federated states which participate in a federal union. A federated state is a territorial and constitutional community forming part of a federation. (Compare confederacies or confederations such as Switzerland.) Such states differ from sovereign states in that they have transferred a portion of their sovereign powers to a federal government.

    One can commonly and sometimes readily (but not necessarily usefully) classify states according to their apparent make-up or focus. The concept of the nation-state, theoretically or ideally co-terminous with a “nation”, became very popular by the 20th century in Europe, but occurred rarely elsewhere or at other times. In contrast, some states have sought to make a virtue of their multi-ethnic or multinational character (Habsburg Austria-Hungary, for example, or the Soviet Union), and have emphasised unifying characteristics such as autocracy, monarchical legitimacy, or ideology. Other states, often fascist or authoritarian ones, promoted state-sanctioned notions of racial superiority. Other states may bring ideas of commonality and inclusiveness to the fore: note the res publica of ancient Rome and the Rzeczpospolita of Poland-Lithuania, which find echoes in the modern-day republic. The concept of temple states centred on religious shrines occurs in some discussions of the ancient world. Relatively small city-states, once a relatively common and often successful form of polity, have become rarer and comparatively less prominent in modern times. Modern-day independent city-states include Vatican City, Monaco, and Singapore. Other city-states survive as federated states, like the present-day German city-states, or as otherwise autonomous entities with limited sovereignty, like Hong Kong, Gibraltar and Ceuta. To some extent, urban secession, the creation of a new city-state (sovereign or federated), continues to be discussed in the early 21st century in cities such as London.

    A state can be distinguished from a government. The state is the organization while the government is the particular group of people, the administrative bureaucracy that controls the state apparatus at a given time. That is, governments are the means through which state power is employed. States are served by a continuous succession of different governments. States are immaterial and nonphysical social objects, whereas governments are groups of people with certain coercive powers.

    Each successive government is composed of a specialized and privileged body of individuals, who monopolize political decision-making and are separated by status and organization from the population as a whole.

    States can also be distinguished from the concept of a “nation”, where “nation” refers to a cultural-political community of people. A nation-state refers to a situation where a single ethnicity is associated with a specific state.

    In classical thought, the state was identified with both political society and civil society as a form of political community, while modern thought distinguished the nation-state as a political society from civil society as a form of economic society.

    Thus, in modern thought, the state is contrasted with civil society.

    Antonio Gramsci believed that civil society is the primary locus of political activity because it is where all forms of “identity formation, ideological struggle, the activities of intellectuals, and the construction of hegemony take place”, and that civil society was the nexus connecting the economic and political spheres. Arising out of the collective actions of civil society is what Gramsci calls “political society”, which he differentiates from the notion of the state as a polity. He stated that politics was not a “one-way process of political management” but, rather, that the activities of civil organizations conditioned the activities of political parties and state institutions, and were conditioned by them in turn. Louis Althusser argued that civil organizations such as the church, schools, and the family are part of an “ideological state apparatus” which complements the “repressive state apparatus” (such as police and military) in reproducing social relations.

    Jürgen Habermas spoke of a public sphere that was distinct from both the economic and political sphere.

    Given the role that many social groups have in the development of public policy and the extensive connections between state bureaucracies and other institutions, it has become increasingly difficult to identify the boundaries of the state. Privatization, nationalization, and the creation of new regulatory bodies also change the boundaries of the state in relation to society. Often the nature of quasi-autonomous organizations is unclear, generating debate among political scientists on whether they are part of the state or civil society. Some political scientists thus prefer to speak of policy networks and decentralized governance in modern societies rather than of state bureaucracies and direct state control over policy.

    The earliest forms of the state emerged whenever it became possible to centralize power in a durable way. Agriculture and a settled population have been attributed as necessary conditions to form states. Certain types of agriculture are more conducive to state formation, such as grain (wheat, barley, millet), because they are suited to concentrated production, taxation, and storage. Agriculture and writing are almost everywhere associated with this process: agriculture because it allowed for the emergence of a social class of people who did not have to spend most of their time providing for their own subsistence, and writing (or an equivalent of writing, like Inca quipus) because it made possible the centralization of vital information. Bureaucratization made expansion over large territories possible.

    The first known states were created in Egypt, Mesopotamia, India, China, Mesoamerica, and the Andes. It is only in relatively modern times that states have almost completely displaced alternative “stateless” forms of political organization of societies all over the planet. Roving bands of hunter-gatherers and even fairly sizable and complex tribal societies based on herding or agriculture have existed without any full-time specialized state organization, and these “stateless” forms of political organization have in fact prevailed for all of prehistory and much of human history and civilization.

    The primary competing organizational forms to the state were religious organizations (such as the Church), and city republics.

    Since the late 19th century, virtually the entirety of the world’s inhabitable land has been parcelled up into areas with more or less definite borders claimed by various states. Earlier, quite large land areas had been either unclaimed or uninhabited, or inhabited by nomadic peoples who were not organised as states. However, even within present-day states there are vast areas of wilderness, like the Amazon rainforest, which are uninhabited or inhabited solely or mostly by indigenous people (and some of them remain uncontacted). Also, there are so-called “failed states” which do not hold de facto control over all of their claimed territory or where this control is challenged. Currently, the international community comprises around 200 sovereign states, the vast majority of which are represented in the United Nations.

    For most of human history, people have lived in stateless societies, characterized by a lack of concentrated authority, and the absence of large inequalities in economic and political power.

    The anthropologist Tim Ingold writes:

    It is not enough to observe, in a now rather dated anthropological idiom, that hunter-gatherers live in ‘stateless societies’, as though their social lives were somehow lacking or unfinished, waiting to be completed by the evolutionary development of a state apparatus. Rather, the principle of their sociality, as Pierre Clastres has put it, is fundamentally against the state.

    During the Neolithic period, human societies underwent major cultural and economic changes, including the development of agriculture, the formation of sedentary societies and fixed settlements, increasing population densities, and the use of pottery and more complex tools.

    Sedentary agriculture led to the development of property rights, domestication of plants and animals, and larger family sizes. It also provided the basis for an external centralized state. By producing a large surplus of food, more division of labor was realized, which enabled people to specialize in tasks other than food production. Early states were characterized by highly stratified societies, with a privileged and wealthy ruling class that was subordinate to a monarch. The ruling classes began to differentiate themselves through forms of architecture and other cultural practices that were different from those of the subordinate laboring classes.

    In the past, it was suggested that the centralized state was developed to administer large public works systems (such as irrigation systems) and to regulate complex economies. However, modern archaeological and anthropological evidence does not support this thesis, pointing to the existence of several non-stratified and politically decentralized complex societies.

    Mesopotamia is generally considered to be the location of the earliest civilization or complex society, meaning that it contained cities, full-time division of labor, social concentration of wealth into capital, unequal distribution of wealth, ruling classes, community ties based on residency rather than kinship, long distance trade, monumental architecture, standardized forms of art and culture, writing, and mathematics and science. It was the world’s first literate civilization, and formed the first sets of written laws. Bronze metallurgy spread within Afro-Eurasia from 3000 BC, leading to a military revolution in the use of bronze weaponry, which facilitated the rise of states.

    Although state-forms existed before the rise of the Ancient Greek empire, the Greeks were the first people known to have explicitly formulated a political philosophy of the state, and to have rationally analyzed political institutions. Prior to this, states were described and justified in terms of religious myths.

    Several important political innovations of classical antiquity came from the Greek city-states and the Roman Republic. The Greek city-states before the 4th century granted citizenship rights to their free population, and in Athens these rights were combined with a directly democratic form of government that was to have a long afterlife in political thought and history.

    During medieval times in Europe, the state was organized on the principle of feudalism, and the relationship between lord and vassal became central to social organization. Feudalism led to the development of greater social hierarchies.

    The formalization of the struggles over taxation between the monarch and other elements of society (especially the nobility and the cities) gave rise to what is now called the Ständestaat, or the state of Estates, characterized by parliaments in which key social groups negotiated with the king about legal and economic matters. These estates of the realm sometimes evolved in the direction of fully-fledged parliaments, but sometimes lost out in their struggles with the monarch, leading to greater centralization of lawmaking and military power in his hands. Beginning in the 15th century, this centralizing process gave rise to the absolutist state.

    Cultural and national homogenization figured prominently in the rise of the modern state system. Since the absolutist period, states have largely been organized on a national basis. The concept of a national state, however, is not synonymous with nation state. Even in the most ethnically homogeneous societies there is not always a complete correspondence between state and nation, hence the active role often taken by the state to promote nationalism through an emphasis on shared symbols and national identity.

    Charles Tilly argues that the number of total states in Western Europe declined rapidly from the Late Middle Ages to the Early Modern Era during a process of state formation. Other research has disputed whether such a decline took place.

    For Edmund Burke (Dublin 1729 – Beaconsfield 1797), “a state without the means of some change is without the means of its conservation” (Reflections on the Revolution in France).

    According to Hendrik Spruyt, the modern state is different from its predecessor polities in two main aspects: (1) modern states have a greater capacity to intervene in their societies, and (2) modern states are buttressed by the principle of international legal sovereignty and the judicial equivalence of states. The two features began to emerge in the Late Middle Ages, but the modern state form took centuries to come fully to fruition. Other features of modern states are that they tend to be organized as unified national polities and that they have rational-legal bureaucracies.

    Sovereign equality did not become fully global until after World War II amid decolonization. Adom Getachew writes that it was not until the 1960 Declaration on the Granting of Independence to Colonial Countries and Peoples that the international legal context for popular sovereignty was instituted. Historians Jane Burbank and Frederick Cooper argue that “Westphalian sovereignty” – the notion that bounded, unitary states interact with equivalent states – “has more to do with 1948 than 1648.”

    Theories for the emergence of the earliest states emphasize grain agriculture and settled populations as necessary conditions.

    However, not all types of property are equally exposed to the risk of looting or equally subject to taxation. Goods differ in their shelf life. Certain agricultural products, fish, and dairy spoil quickly and cannot be stored without refrigeration or freezing technology, which was unavailable in ancient times. As a result, such perishable goods were of little interest to either looters or the king. (In ancient times, especially before the invention of money, taxation was primarily collected in agricultural produce.) Both looters and rulers sought goods with long shelf lives, such as grains (wheat, barley, rice, corn, etc.), which, under proper storage conditions, could be preserved for extended periods. With the domestication of wheat and the establishment of agricultural communities, the need for protection from bandits arose, along with the emergence of strong governance to provide it. Mayshar et al. (2020) demonstrated that societies cultivating grains tended to develop hierarchical structures with a ruling elite that collected taxes, whereas societies that relied on root crops (which have short shelf lives) did not develop such hierarchies. The cultivation of grains became concentrated in regions with fertile soil, where grain production was more profitable than root crops, even after accounting for taxes imposed by rulers and raids by looters.

    However, protection was not the only public good necessitating a centralized government. The shift to agriculture based on irrigation systems, as seen in ancient Egypt, required cooperation among farmers. An individual farmer could not control the floods from the Nile River alone. Managing the vast amounts of water during the annual floods and utilizing them efficiently allowed for a significant increase in agricultural yield, but this required an elaborate network of irrigation canals to distribute water efficiently across fields while minimizing waste.

    Such a system exhibited characteristics of a natural monopoly, as its construction involved substantial fixed costs, making it a lucrative asset for the ruling elite. Bentzen, Kaarsen, and Wingender (2017) showed that in pre-modern societies, regions dependent on irrigation-intensive agriculture experienced higher levels of land inequality. The concentration of land and control over water resources strengthened elite power, enabling them to resist democratization in the modern era. Even today, countries that rely on irrigated agriculture tend to be less democratic than those relying on rain-fed farming.

    Some argue that climate change led to a greater concentration of human populations around dwindling waterways.

    Hendrik Spruyt distinguishes between three prominent categories of explanations for the emergence of the modern state as a dominant polity: (1) security-based explanations that emphasize the role of warfare, (2) economy-based explanations that emphasize trade, property rights and capitalism as drivers behind state formation, and (3) institutionalist theories that see the state as an organizational form better able to resolve conflict and cooperation problems than competing political organizations.

    According to Philip Gorski and Vivek Swaroop Sharma, the “neo-Darwinian” framework for the emergence of sovereign states is the dominant explanation in the scholarship. The neo-Darwinian framework emphasizes how the modern state emerged as the dominant organizational form through natural selection and competition.

    Most political theories of the state can roughly be classified into two categories:

    “liberal” or “conservative” theories treat capitalism as a given, and then concentrate on the function of states in capitalist society. These theories tend to see the state as a neutral entity, separated from society and the economy.

    Marxist and anarchist theories, on the other hand, see politics as intimately tied in with economic relations, and emphasize the relation between economic power and political power. They see the state as a partisan instrument that primarily serves the interests of the upper class.

    Anarchism as a political philosophy regards the state and hierarchies as unnecessary and harmful, and instead promotes a stateless society, or anarchy, a self-managed, self-governed society based on voluntary, cooperative institutions.

    Anarchists believe that the state is inherently an instrument of domination and repression, no matter who is in control of it. Anarchists note that the state possesses the monopoly on the legal use of violence. Unlike Marxists, anarchists believe that revolutionary seizure of state power should not be a political goal. They believe instead that the state apparatus should be completely dismantled, and an alternative set of social relations created, which are not based on state power at all.

    Various Christian anarchists, such as Jacques Ellul, have identified the state and political power as the Beast in the Book of Revelation.

    Anarcho-capitalists such as Murray Rothbard come to some of the same conclusions about the state apparatus as anarchists, but for different reasons. The two principles that anarcho-capitalists rely on most are consent and non-initiation. Consent in anarcho-capitalist theory requires that individuals explicitly assent to the jurisdiction of the state, excluding Lockean tacit consent. Consent may also create a right of secession, which destroys any concept of a government monopoly on force. Coercive monopolies are excluded by the non-initiation-of-force principle because they must use force in order to prevent others from offering the same service that they do. Anarcho-capitalists start from the belief that replacing monopolistic states with competitive providers is necessary from a normative, justice-based standpoint.

    Anarcho-capitalists believe that the market values of competition and privatization can better provide the services provided by the state. Murray Rothbard argues in Power and Market that any and all government functions could be better fulfilled by private actors, including defense, infrastructure, and legal adjudication.

    Marx and Engels were clear that the goal of communism was a classless society in which the state would have “withered away”, replaced only by the “administration of things”. Their views are found throughout their Collected Works, which address past or then-extant state forms from an analytical and tactical viewpoint, but not future social forms; speculation about the latter is generally antithetical to groups considering themselves Marxist but who, not having conquered the existing state power(s), are not in the position of supplying the institutional form of an actual society. To the extent that it makes sense to speak of one, there is no single “Marxist theory of the state”; rather, several different purportedly “Marxist” theories have been developed by adherents of Marxism.

    Marx’s early writings portrayed the bourgeois state as parasitic, built upon the superstructure of the economy, and working against the public interest. He also wrote that the state mirrors class relations in society in general, acting as a regulator and repressor of class struggle, and as a tool of political power and domination for the ruling class. The Communist Manifesto claims the state to be nothing more than “a committee for managing the common affairs of the bourgeoisie.”

    For Marxist theorists, the role of the modern bourgeois state is determined by its function in the global capitalist order. Ralph Miliband argued that the ruling class uses the state as its instrument to dominate society by virtue of the interpersonal ties between state officials and economic elites. For Miliband, the state is dominated by an elite that comes from the same background as the capitalist class. State officials therefore share the same interests as owners of capital and are linked to them through a wide array of social, economic, and political ties.

    Gramsci’s theories of state emphasized that the state is only one of the institutions in society that helps maintain the hegemony of the ruling class, and that state power is bolstered by the ideological domination of the institutions of civil society, such as churches, schools, and mass media.

    Pluralists view society as a collection of individuals and groups, who are competing for political power. They then view the state as a neutral body that simply enacts the will of whichever groups dominate the electoral process. Within the pluralist tradition, Robert Dahl developed the theory of the state as a neutral arena for contending interests or its agencies as simply another set of interest groups. With power competitively arranged in society, state policy is a product of recurrent bargaining. Although pluralism recognizes the existence of inequality, it asserts that all groups have an opportunity to pressure the state. The pluralist approach suggests that the modern democratic state’s actions are the result of pressures applied by a variety of organized interests. Dahl called this kind of state a polyarchy.

    Pluralism has been challenged on the ground that it is not supported by empirical evidence. Citing surveys showing that the large majority of people in high leadership positions are members of the wealthy upper class, critics of pluralism claim that the state serves the interests of the upper class rather than equitably serving the interests of all social groups.

    Jürgen Habermas believed that the base-superstructure framework, used by many Marxist theorists to describe the relation between the state and the economy, was overly simplistic. He felt that the modern state plays a large role in structuring the economy, by regulating economic activity and being a large-scale economic consumer/producer, and through its redistributive welfare state activities. Because of the way these activities structure the economic framework, Habermas felt that the state cannot be looked at as passively responding to economic class interests.

    Michel Foucault believed that modern political theory was too state-centric, saying “Maybe, after all, the state is no more than a composite reality and a mythologized abstraction, whose importance is a lot more limited than many of us think.” He thought that political theory was focusing too much on abstract institutions, and not enough on the actual practices of government. In Foucault’s opinion, the state had no essence. He believed that instead of trying to understand the activities of governments by analyzing the properties of the state (a reified abstraction), political theorists should be examining changes in the practice of government to understand changes in the nature of the state. Foucault developed the concept of governmentality while considering the genealogy of state, and considers the way in which an individual’s understanding of governance can influence the function of the state.

    Foucault argues that it is technology that has created the state and made it so elusive and successful, and that instead of looking at the state as something to be toppled, as in the Marxist and anarchist understanding, we should look at it as a technological manifestation, a system with many heads. Every scientific and technological advance, Foucault argues, has come into the service of the state, and it is with the emergence of the mathematical sciences, and especially the formation of mathematical statistics, that one gains an understanding of the complex technology by which the modern state was so successfully created. Foucault insists that the nation-state was not a historical accident but a deliberate production, in which the modern state had to manage the population in step with the emerging practice of the police (cameral science), ‘allowing’ the population to ‘come in’ to jus gentium and civitas (civil society) after being deliberately excluded for several millennia. On this view, democracy (the newly formed voting franchise) was not, as political revolutionaries and political philosophers have always painted it, a cry for political freedom or a desire to be accepted by the ‘ruling elite’; it was instead part of a skilled endeavour of converting technologies readily available from the medieval period, such as translatio imperii, plenitudo potestatis and extra Ecclesiam nulla salus, into mass persuasion for the future industrial ‘political’ population, which was now asked to insist upon itself that “the president must be elected”. The political symbol agents, represented by the pope and the president, are thereby democratised. Foucault calls these new forms of technology biopower, and they form part of our political inheritance, which he calls biopolitics.

    Heavily influenced by Gramsci, Nicos Poulantzas, a Greek neo-Marxist theorist, argued that capitalist states do not always act on behalf of the ruling class, and that when they do, it is not necessarily because state officials consciously strive to do so, but because the ‘structural’ position of the state is configured in such a way as to ensure that the long-term interests of capital are always dominant. Poulantzas’ main contribution to the Marxist literature on the state was the concept of the ‘relative autonomy’ of the state. While Poulantzas’ work on ‘state autonomy’ has served to sharpen and specify a great deal of Marxist literature on the state, his own framework came under criticism for its ‘structural functionalism’.

    The state can be considered a single structural universe: the historical reality that takes shape in societies characterized by a codified or crystallized law; by a power organized hierarchically and justified by the law that gives it authority; by a well-defined social and economic stratification; by an economic and social organization that gives the society precise organic characteristics; and by one or more religious organizations that justify the power expressed by such a society, support the religious beliefs of individuals, and are accepted by society as a whole. Such a structural universe evolves in a cyclical manner, presenting two different historical phases: a mercantile phase, or “open society”, and a feudal phase, or “closed society”. Their characteristics are so divergent that they can qualify as two different levels of civilization, neither of which is ever definitive; the two alternate cyclically, and each can be considered progressive, even by the most cultured, educated and intellectually well-equipped fractions of the various societies in both historical phases, in a partisan way that is entirely independent of the real degree of well-being, freedom granted, equality realized, or concrete possibility of achieving further progress in the level of civilization.

    State autonomy theorists believe that the state is an entity that is impervious to external social and economic influence and that it has interests of its own.

    “New institutionalist” writings on the state, such as the works of Theda Skocpol, suggest that state actors are to an important degree autonomous. In other words, state personnel have interests of their own, which they can and do pursue independently of (and at times in conflict with) actors in society. Since the state controls the means of coercion, and given the dependence of many groups in civil society on the state for achieving any goals they may espouse, state personnel can to some extent impose their own preferences on civil society.

    States generally rely on a claim to some form of political legitimacy in order to maintain domination over their subjects.

    The rise of the modern-day state system was closely related to changes in political thought, especially concerning the changing understanding of legitimate state power and control. Early modern defenders of absolutism (absolute monarchy), such as Thomas Hobbes and Jean Bodin, undermined the doctrine of the divine right of kings by arguing that the power of kings should be justified by reference to the people. Hobbes in particular went further, arguing that political power should be justified with reference to the individual (Hobbes wrote in the time of the English Civil War), not just to the people understood collectively. Both Hobbes and Bodin thought they were defending the power of kings, not advocating for democracy, but their arguments about the nature of sovereignty were fiercely resisted by more traditional defenders of the power of kings, such as Sir Robert Filmer in England, who thought that such defenses ultimately opened the way to more democratic claims.

    Max Weber identified three main sources of political legitimacy in his works. The first, legitimacy based on traditional grounds, derives from a belief that things should be as they have been in the past, and that those who defend these traditions have a legitimate claim to power. The second, legitimacy based on charismatic leadership, is devotion to a leader or group viewed as exceptionally heroic or virtuous. Weber’s concept of charisma is also explored by Fukuyama, who uses it to explain why individuals relinquish their personal freedoms and more egalitarian smaller communities in favor of larger, more authoritarian states. Fukuyama goes further, arguing that charismatic leaders can leverage this mass mobilization as a military force, achieving victories and securing peace, which in turn further legitimizes their authority. He cites the example of Muhammad, whose influence facilitated the rise of a powerful state in North Africa and the Middle East, despite limited economic foundations. The third is rational-legal authority, whereby legitimacy derives from the belief that a certain group has been placed in power in a legal manner, and that its actions are justifiable according to a specific code of written laws. Weber believed that the modern state is characterized primarily by appeals to rational-legal authority.

    Some states are often labeled as “weak” or “failed”. In David Samuels’s words, “…a failed state occurs when sovereignty over claimed territory has collapsed or was never effectively established at all”. Authors like Samuels and Joel S. Migdal have explored the emergence of weak states, how they differ from Western “strong” states, and the consequences for the economic development of developing countries.

    Samuels introduces the idea of state capacity, which he uses to refer to the ability of the state to fulfill its basic functions, such as providing security, maintaining law and order, and delivering public services. When a state fails to do so, state failure occurs (Samuels, 2012). Other authors like Jeffrey Herbst add to this idea by arguing that state failure is the result of weak or non-existent institutions: such states lack legitimacy because they are unable to provide goods and services or maintain order and safety (Herbst, 1990). However, there are also ideas that challenge this notion of state failure. Stephen D. Krasner argues that state failure is not just the result of weak institutions, but rather a complex phenomenon that varies according to context-specific circumstances, and should therefore not be analyzed through the simplistic understanding normally presented (Krasner, 2004).

    In “The Problem of Failed States”, Susan Rice argues that state failure is an important threat to global stability and security, since failed states are vulnerable to terrorism and conflict (Rice, 1994). Additionally, state failure is believed to hinder democratic values, since these states often experience political violence, authoritarian rule, and numerous human rights abuses (Rotberg, 2004). While there is great discussion regarding the direct effects of state failure, its indirect effects should also be highlighted: state failure can lead to refugee flows and cross-border conflicts, while failed states can also become safe havens for criminal or extremist groups (Corbridge, 2005). To address and prevent these issues, it is necessary to focus on building strong institutions, promoting economic diversification and development, and addressing the causes of violence in each state (Mkandawire, 2001).

    To understand the formation of weak states, Samuels compares the formation of European states in the 1600s with the conditions under which more recent states were formed in the twentieth century. In this line of argument, the state allows a population to resolve a collective action problem: citizens recognize the authority of the state, and the state in turn exercises the power of coercion over them. This kind of social organization required a decline in the legitimacy of traditional forms of rule (such as religious authorities), replaced by an increase in the legitimacy of depersonalized rule; an increase in the central government’s sovereignty; and an increase in the organizational complexity of the central government (bureaucracy).

    The transition to this modern state became possible in Europe around 1600 thanks to the confluence of factors like technological developments in warfare, which generated strong incentives to tax and to consolidate central structures of governance in response to external threats. This was complemented by an increase in food production (as a result of productivity improvements), which made it possible to sustain a larger population and so increased the complexity and centralization of states. Finally, cultural changes challenged the authority of monarchies and paved the way for the emergence of modern states.

    The conditions that enabled the emergence of modern states in Europe were different for other countries that started this process later. As a result, many of these states lack effective capabilities to tax and extract revenue from their citizens, which leads to problems like corruption, tax evasion, and low economic growth. Unlike in the European case, late state formation occurred in a context of limited international conflict, which diminished the incentives to tax and to increase military spending. Many of these states also emerged from colonization in a state of poverty and with institutions designed to extract natural resources, which has made state formation more difficult. European colonization also drew many arbitrary borders that mixed different cultural groups under the same national identities, making it difficult to build states with legitimacy among the whole population, since some states must compete for it with other forms of political identity.

    As a complement to this argument, Migdal gives a historical account of how sudden social changes in the Third World during the Industrial Revolution contributed to the formation of weak states. The expansion of international trade that began around 1850 brought profound changes to Africa, Asia and Latin America, introduced with the aim of assuring the availability of raw materials for the European market. These changes consisted of: i) reforms to landownership laws intended to integrate more land into the international economy; ii) increased taxation of peasants and small landowners, along with collection of these taxes in cash rather than in kind, as had been usual until then; and iii) the introduction of new and less costly modes of transportation, mainly railroads. As a result, traditional forms of social control became obsolete, eroding existing institutions and opening the way to the creation of new ones that did not necessarily lead these countries to build strong states. This fragmentation of the social order induced a political logic in which these states were captured to some extent by “strongmen”, who were able to take advantage of the above-mentioned changes and who challenged the sovereignty of the state. This decentralization of social control has impeded the consolidation of strong states.

  • Customary law

    A legal custom is the established pattern of behavior within a particular social setting. A claim can be carried out in defense of “what has always been done and accepted by law”.

    Customary law (also, consuetudinary or unofficial law) exists where:

    a certain legal practice is observed and the relevant actors consider it to be an opinion of law or necessity (opinio juris). Most customary laws deal with standards of the community that have been long-established in a given locale. However, the term can also apply to areas of international law where certain standards have been nearly universal in their acceptance as correct bases of action – for example, laws against piracy or slavery (see hostis humani generis). In many, though not all instances, customary laws will have supportive court rulings and case law that have evolved over time to give additional weight to their rule as law and also to demonstrate the trajectory of evolution (if any) in the judicial interpretation of such law by relevant courts.

    A central issue regarding the recognition of custom is determining the appropriate methodology for knowing what practices and norms actually constitute customary law. It is not immediately clear that classic Western theories of jurisprudence can be reconciled in any useful way with conceptual analyses of customary law, and thus some scholars (like John Comaroff and Simon Roberts) have characterized customary law norms in their own terms. Yet there clearly remains some disagreement, seen in John Hund’s critique of Comaroff and Roberts’ theory and his preference for the contributions of H. L. A. Hart. Hund argues that Hart’s The Concept of Law solves the conceptual problem faced by scholars who have attempted to articulate how customary law principles may be identified and defined, and how they operate in regulating social behavior and resolving disputes.

    Customary law is the set of customs, practices and beliefs that are accepted as obligatory rules of conduct by a community.

    Comaroff and Roberts’ famous work, “Rules and Processes”, attempted to detail the body of norms that constitute Tswana law in a way that was less legalistic (or rule-oriented) than Isaac Schapera’s account had been. They defined “mekgwa le melao ya Setswana” in terms of Casalis and Ellenberger’s definition: melao being rules pronounced by a chief, and mekgwa norms that become customary law through traditional usage. Importantly, however, they noted that the Tswana seldom attempt to classify the vast array of existing norms into categories, and they thus termed this the ‘undifferentiated nature of the normative repertoire’. Moreover, they observed the co-existence of overtly incompatible norms that may breed conflict, whether due to circumstances in a particular situation or inherently due to their incongruous content. This lack of rule classification and failure to eradicate internal inconsistencies between potentially conflicting norms allows for much flexibility in dispute settlement and is also viewed as a ‘strategic resource’ for disputants who seek to advance their own success in a case. The latter incongruities (especially inconsistencies of norm content) are typically resolved by elevating one of the norms (tacitly) from ‘the literal to the symbolic’. This allows for the accommodation of both, as they now theoretically exist in different realms of reality. This is highly contextual, which further illustrates that norms cannot be viewed in isolation and are open to negotiation. Thus, although there are a small number of so-called non-negotiable norms, the vast majority are viewed and given substance contextually, which is seen as fundamental to the Tswana.

    Comaroff and Roberts describe how the outcomes of specific cases can change the normative repertoire, as the repertoire of norms is seen to be in a state of formation and transformation at all times. These changes are justified on the grounds that they merely give recognition to de facto observations of transformation. Furthermore, the legitimacy of a chief is a direct determinant of the legitimacy of his decisions. In the formulation of legislative pronouncements, as opposed to decisions made in dispute resolution, the chief first discusses the proposed norm with his advisors, then with the council of headmen; the public assembly then debates the proposed law and may accept or reject it. A chief can proclaim a law even if the public assembly rejects it, but this is rarely done; if the chief proclaims legislation against the will of the public assembly, it will become melao, but it is unlikely to be executed, because its effectiveness depends on the chief’s legitimacy and on the norm’s consistency with the practices (and changing social relations) and the will of the people under that chief.

    Regarding the invocation of norms in disputes, Comaroff and Roberts used the term “paradigm of argument” to refer to the linguistic and conceptual frame used by a disputant, whereby ‘a coherent picture of relevant events and actions in terms of one or more implicit or explicit normative referents’ is created. The complainant (who always speaks first) thus establishes a paradigm that the defendant can either accept, and therefore argue within that specific paradigm, or reject, and therefore introduce his or her own paradigm (usually, the facts are not contested here). If the defendant means to change the paradigm, they will refer to norms as such, although norms are not ordinarily referenced explicitly in Tswana dispute resolution, since the audience would typically already know them, and the way one presents one’s case and constructs the facts is enough to establish one’s paradigm. The headman or chief adjudicating may do the same: accept the normative basis implied by the parties (or one of them) and thus not refer to norms in explicit language; isolate a factual issue in the dispute and decide it without expressly referring to any norms; or impose a new or different paradigm onto the parties.

    Hund finds Comaroff and Roberts’ flexibility thesis of a ‘repertoire of norms’, from which litigants and adjudicator choose in the process of negotiating solutions between them, uncompelling. He is therefore concerned with disproving what he calls “rule scepticism” on their part. He notes that the concept of custom generally denotes convergent behaviour, but not all customs have the force of law. Hund therefore draws on Hart’s analysis distinguishing social rules, which have internal and external aspects, from habits, which have only external aspects. Internal aspects are the reflective attitude on the part of adherents toward certain behaviours perceived to be obligatory, according to a common standard. External aspects manifest in regular, observable behaviour, but are not obligatory. In Hart’s analysis, then, social rules amount to custom that has legal force.

    Hart identifies three further differences between habits and binding social rules. First, a social rule exists where society frowns on deviation from the habit and attempts to prevent departures by criticising such behaviour. Second, this criticism is seen socially as a good reason for adhering to the habit, and it is welcomed. And third, members of a group behave in a common way not only out of habit or because everyone else is doing it, but because it is seen to be a common standard that should be followed, at least by some members. Hund, however, acknowledges the difficulty an outsider faces in knowing the dimensions of these criteria, which depend on an internal point of view.

    For Hund, the first form of rule scepticism concerns the widely held opinion that, because the content of customary law derives from practice, there are actually no objective rules, since it is only behaviour that informs their construction. On this view, it is impossible to distinguish between behaviour that is rule bound and behaviour that is not—i.e., which behaviour is motivated by adherence to law (or at least done in recognition of the law) and which is merely a response to other factors. Hund sees this as problematic because it makes quantifying the law almost impossible, since behaviour is obviously inconsistent. Hund argues that this is a misconception based on a failure to acknowledge the importance of the internal element. In his view, using the criteria described above, there is no such problem in deciphering what constitutes “law” in a particular community.

    According to Hund, the second form of rule scepticism says that, though a community may have rules, those rules are not arrived at ‘deductively’, i.e. they are not created through legal/moral reasoning alone but are instead driven by the personal or political motives of those who create them. The scope for such influence is created by the loose and undefined nature of customary law, which, Hund argues, grants customary-lawmakers (often through traditional ‘judicial processes’) a wide discretion in its application. Yet Hund contends that the fact that rules might sometimes be arrived at in a more ad hoc way does not mean that this defines the system. If one requires a perfect system, where laws are created only deductively, then one is left with a system with no rules. For Hund, this cannot be so, and an explanation for these kinds of law-making processes is found in Hart’s conception of “secondary rules” (rules in terms of which the main body of norms is recognised). Hund therefore says that for some cultures, for instance in some sections of Tswana society, the secondary rules have developed only to the point where laws are determined with reference to politics and personal preference. This does not mean that they are not “rules”. Hund argues that if we acknowledge a developmental pattern in societies’ constructions of these secondary rules, then we can understand how such a society constructs its laws and how it differs from societies that have come to rely on an objective, stand-alone body of rules.

    The modern codification of civil law developed from the tradition of medieval custumals, collections of local customary law that developed in a specific manorial or borough jurisdiction, and which were slowly pieced together mainly from case law and later written down by local jurists. Custumals acquired the force of law when they became the undisputed rule by which certain rights, entitlements, and obligations were regulated between members of a community. Some examples include Bracton’s De Legibus et Consuetudinibus Angliae for England, the Coutume de Paris for the city of Paris, the Sachsenspiegel for northern Germany, and the many fueros of Spain.

    In international law, customary law refers to the Law of Nations or the legal norms that have developed through the customary exchanges between states over time, whether based on diplomacy or aggression. Essentially, legal obligations are believed to arise between states to carry out their affairs consistently with past accepted conduct. These customs can also change based on the acceptance or rejection by states of particular acts. Some principles of customary law have achieved the force of peremptory norms, which cannot be violated or altered except by a norm of comparable strength. These norms are said to gain their strength from universal acceptance, such as the prohibitions against genocide and slavery. Customary international law can be distinguished from treaty law, which consists of explicit agreements between nations to assume obligations. However, many treaties are attempts to codify pre-existing customary law.

    Customary law is a recognized source of law within jurisdictions of the civil law tradition, where it may be subordinate to both statutes and regulations. In addressing custom as a source of law within the civil law tradition, John Henry Merryman notes that, though the attention it is given in scholarly works is great, its importance is “slight and decreasing”. On the other hand, in many countries around the world, one or more types of customary law continue to exist side by side with official law, a condition referred to as legal pluralism (see also List of national legal systems).

    In the canon law of the Catholic Church, custom is a source of law. Canonical jurisprudence, however, differs from civil law jurisprudence in requiring the express or implied consent of the legislator for a custom to obtain the force of law.

    In the English common law, “long usage” must be established.

    It is a broad principle of property law that, if something has gone on for a long time without objection, whether it be using a right of way or occupying land to which one has no title, the law will eventually recognise the fact and give the person doing it the legal right to continue.

    It is known in case law as “customary rights”. Something which has been practised since time immemorial by reference to a particular locality may acquire the legal status of a custom, which is a form of local law. The legal criteria defining a custom are precise. The most common claim in recent times is for customary rights to moor a vessel.

    The mooring must have been in continuous use for “time immemorial”, which is defined by legal precedent as 12 years (or 20 years for Crown land) of use for the same purpose by the people claiming the right. To give two examples: a custom of mooring established in past times, over two hundred years, by the fishing fleet of local inhabitants of a coastal community will not simply transfer so as to benefit present-day recreational boat owners who may hail from much further afield. By contrast, a group of houseboats on a mooring in continuous use for the last 25 years, with a mixture of owner-occupiers and rented houseboats, may clearly continue to be used by houseboats whose owners live in the same town or city. Both the purpose of the moorings and the class of persons benefited by the custom must have been clear and consistent.

    In Canada, customary aboriginal law has a constitutional foundation and for this reason has increasing influence.

    In the Scandinavian countries customary law continues to exist and has great influence.

    Customary law is also used in some developing countries, usually alongside common or civil law. For example, in Ethiopia, despite the adoption of legal codes based on civil law in the 1950s, there are, according to Dolores Donovan and Getachew Assefa, more than 60 systems of customary law currently in force, “some of them operating quite independently of the formal state legal system”. They offer two reasons for the relative autonomy of these customary law systems: one is that the Ethiopian government lacks sufficient resources to enforce its legal system in every corner of Ethiopia; the other is that the Ethiopian government has made a commitment to preserve these customary systems within its boundaries.

    In 1995, President of Kyrgyzstan Askar Akaev announced a decree to revitalize the aqsaqal courts of village elders. The courts would have jurisdiction over property, torts and family law. The aqsaqal courts were eventually included under Article 92 of the Kyrgyz constitution. As of 2006, there were approximately 1,000 aqsaqal courts throughout Kyrgyzstan, including in the capital of Bishkek. Akaev linked the development of these courts to the rekindling of Kyrgyz national identity. In a 2005 speech, he connected the courts back to the country’s nomadic past and extolled how the courts expressed the Kyrgyz ability of self-governance. Similar aqsaqal courts exist, with varying levels of legal formality, in other countries of Central Asia.

    The Somali people in the Horn of Africa follow a customary law system referred to as xeer. It survives to a significant degree everywhere in Somalia and in the Somali communities in the Ogaden. Economist Peter Leeson attributes the increase in economic activity since the fall of the Siad Barre administration to the security in life, liberty and property provided by Xeer in large parts of Somalia. The Dutch attorney Michael van Notten also draws upon his experience as a legal expert in his comprehensive study on Xeer, The Law of the Somalis: A Stable Foundation for Economic Development in the Horn of Africa (2005).

    In India many customs are accepted by law. For example, Hindu marriage ceremonies are recognized by the Hindu Marriage Act.

    In Indonesia, the customary adat laws of the country’s various indigenous ethnicities are recognized, and customary dispute resolution is recognized in Papua. Indonesian adat law is mainly divided into 19 circles, namely Aceh, Gayo, Alas, and Batak, Minangkabau, South Sumatra, the Malay regions, Bangka and Belitung, Kalimantan, Minahasa, Gorontalo, Toraja, South Sulawesi, Ternate, the Moluccas, Papua, Timor, Bali and Lombok, Central and East Java including the island of Madura, Sunda, and the Javanese monarchies, including the Yogyakarta Sultanate, Surakarta Sunanate, and the Pakualaman and Mangkunegaran princely states.

    In the Philippines, the Indigenous Peoples’ Rights Act of 1997 recognizes customary laws of indigenous peoples within their domain.

  • Francis Fukuyama

    Francis Yoshihiro Fukuyama (/ˌfuːkuːˈjɑːmə/; born October 27, 1952) is an American political scientist, political economist, international relations scholar, and writer.

    Fukuyama is best known for his book The End of History and the Last Man (1992), which argues that the worldwide spread of liberal democracies and the free-market capitalism of the West and its lifestyle may signal the end point of humanity’s sociocultural evolution and political struggle and become the final form of human government, an assessment that has met with numerous and substantial criticisms. In his subsequent book Trust: The Social Virtues and the Creation of Prosperity (1995), he modified his earlier position to acknowledge that culture cannot be cleanly separated from economics. Fukuyama is also associated with the rise of the neoconservative movement, from which he has since distanced himself.

    Fukuyama has been a senior fellow at the Freeman Spogli Institute for International Studies since July 2010 and the Mosbacher Director of the Center on Democracy, Development and the Rule of Law at Stanford University. In August 2019, he was named director of the Ford Dorsey Master’s in International Policy at Stanford.

    Before that, he served as a professor and director of the International Development program at the School of Advanced International Studies of Johns Hopkins University. He had also been the Omer L. and Nancy Hirst Professor of Public Policy at the School of Public Policy at George Mason University.

    He is a council member of the International Forum for Democratic Studies founded by the National Endowment for Democracy and was a member of the Political Science Department of the RAND Corporation. He is also one of the 25 leading figures on the Information and Democracy Commission launched by Reporters Without Borders. In 2024, he received the Riggs Award for Lifetime Achievement in International and Comparative Public Administration.

    Francis Fukuyama was born in the Hyde Park neighborhood of Chicago, Illinois, United States. His paternal grandfather fled the Russo-Japanese War in 1905 and started a shop on the west coast before being incarcerated in the Second World War. His father, Yoshio Fukuyama, a second-generation Japanese American, was trained as a minister in the Congregational Church, received a doctorate in sociology from the University of Chicago, and taught religious studies. His mother, Toshiko Kawata Fukuyama (河田敏子), was born in Kyoto, Japan, and was the daughter of Shiro Kawata, founder of the Economics Department of Kyoto University and first president of Osaka City University. Francis, whose Japanese name is Yoshihiro, grew up in Manhattan as an only child, had little contact with Japanese culture, and did not learn Japanese. His family moved to State College, Pennsylvania, in 1967.

    Fukuyama received his Bachelor of Arts degree in classics from Cornell University, where he studied political philosophy under Allan Bloom. He initially pursued graduate studies in comparative literature at Yale University, going to Paris for six months to study under Roland Barthes and Jacques Derrida but became disillusioned and switched to political science at Harvard University. There, he studied with Samuel P. Huntington and Harvey Mansfield, among others. He earned his Ph.D. in political science at Harvard for his thesis on Soviet threats to intervene in the Middle East. In 1979, he joined the global policy think tank RAND Corporation.

    Fukuyama lived at the Telluride House and has been affiliated with the Telluride Association since his undergraduate years at Cornell. Telluride is an education enterprise that has been home to other significant leaders and intellectuals, including Steven Weinberg, Paul Wolfowitz and Kathleen Sullivan.

    Fukuyama was the Omer L. and Nancy Hirst Professor of Public Policy in the School of Public Policy at George Mason University from 1996 to 2000. Until July 10, 2010, he was the Bernard L. Schwartz Professor of International Political Economy and Director of the International Development Program at the Paul H. Nitze School of Advanced International Studies of Johns Hopkins University in Washington, D.C. He is now Olivier Nomellini Senior Fellow and resident in the Center on Democracy, Development, and the Rule of Law at the Freeman Spogli Institute for International Studies at Stanford University, and director of the Ford Dorsey Master’s in International Policy at Stanford.

  • Rule of law

    The rule of law is a political and legal ideal that all people and institutions within a political body are accountable to the same laws, including lawmakers, government officials, and judges. It is sometimes stated simply as “no one is above the law” or “all are equal before the law”. According to Encyclopædia Britannica, it is defined as “the mechanism, process, institution, practice, or norm that supports the equality of all citizens before the law, secures a nonarbitrary form of government, and more generally prevents the arbitrary use of power.”

    Use of the phrase can be traced to 16th-century Britain. In the following century, Scottish theologian Samuel Rutherford employed it in arguing against the divine right of kings. John Locke wrote that freedom in society means being subject only to laws written by a legislature that apply to everyone, with a person being otherwise free from both governmental and private restrictions of liberty. The phrase “rule of law” was further popularized in the 19th century by British jurist A. V. Dicey. However, the principle, if not the phrase itself, was recognized by ancient thinkers. Aristotle wrote: “It is more proper that law should govern than any one of the citizens.”

    The term rule of law is closely related to constitutionalism as well as Rechtsstaat. It refers to a political situation, not to any specific legal rule. Distinct from it is the rule of man, where one person or group of persons rules arbitrarily.

    History

    Although credit for popularizing the expression “the rule of law” in modern times is usually given to A. V. Dicey, development of the legal concept can be traced through history to many ancient civilizations, including ancient Greece, Mesopotamia, India, and Rome.

    Early history (to 15th century)

    The earliest conception of the rule of law can be traced back to the Indian epics Ramayana and Mahabharata – the earliest versions of which date to around the 8th or 9th century BC. The Mahabharata deals with the concepts of Dharma (used to mean law and duty interchangeably), Rajdharma (the duty of the king) and Dharmaraja. It states in one of its slokas that, “The people should execute a king who does not protect them, but deprives them of their property and assets and who takes no advice or guidance from any one. Such a king is not a king but misfortune.”

    Other sources for the philosophy of rule of law can be traced to the Upanishads, which state that, “The law is the king of the kings. No one is higher than the law. Not even the king.” Other commentaries include Kautilya’s Arthashastra (4th-century BC), Manusmriti (dated to the 1st to 3rd century CE), Yajnavalkya-Smriti (dated between the 3rd and 5th century CE), and the Brihaspati Smriti.

    Modern period (1500 CE – present)

    In 1481, during the reign of Ferdinand II of Aragon, the Constitució de l’Observança was approved by the General Court of Catalonia, establishing the submission of royal power (including its officers) to the laws of the Principality of Catalonia.

    In 1607, English Chief Justice Sir Edward Coke said in the Case of Prohibitions “that the law was the golden met-wand and measure to try the causes of the subjects; and which protected His Majesty in safety and peace: with which the King (James I) was greatly offended, and said, that then he should be under the law, which was treason to affirm, as he said; to which I said, that Bracton saith, quod Rex non debet esse sub homine, sed sub Deo et lege (that the King ought not to be under any man but under God and the law.).”

    Among the first modern authors to use the term and give the principle theoretical foundations was Samuel Rutherford in Lex, Rex (1644). The title, Latin for “the law is king”, subverts the traditional formulation rex lex (“the king is law”). James Harrington wrote in Oceana (1656), drawing principally on Aristotle’s Politics, that among forms of government an “Empire of Laws, and not of Men” was preferable to an “Empire of Men, and not of Laws”.

    John Locke also discussed this issue in his Second Treatise of Government (1690):

    The natural liberty of man is to be free from any superior power on earth, and not to be under the will or legislative authority of man, but to have only the law of nature for his rule. The liberty of man, in society, is to be under no other legislative power, but that established, by consent, in the commonwealth; nor under the dominion of any will, or restraint of any law, but what that legislative shall enact, according to the trust put in it. Freedom then is not what Sir Robert Filmer tells us, Observations, A. 55. a liberty for every one to do what he lists, to live as he pleases, and not to be tied by any laws: but freedom of men under government is, to have a standing rule to live by, common to every one of that society, and made by the legislative power erected in it; a liberty to follow my own will in all things, where the rule prescribes not; and not to be subject to the inconstant, uncertain, unknown, arbitrary will of another man: as freedom of nature is, to be under no other restraint but the law of nature.

    The principle was also discussed by Montesquieu in The Spirit of Law (1748). The phrase “rule of law” appears in Samuel Johnson’s Dictionary (1755).

    In 1776, the notion that no one is above the law was popular during the founding of the United States. For example, Thomas Paine wrote in his pamphlet Common Sense that “in America, the law is king. For as in absolute governments the King is law, so in free countries the law ought to be king; and there ought to be no other.” In 1780, John Adams enshrined this principle in Article VI of the Declaration of Rights in the Constitution of the Commonwealth of Massachusetts:

    No man, nor corporation, or association of men, have any other title to obtain advantages, or particular and exclusive privileges, distinct from those of the community, than what arises from the consideration of services rendered to the public; and this title being in nature neither hereditary, nor transmissible to children, or descendants, or relations by blood, the idea of a man born a magistrate, lawgiver, or judge, is absurd and unnatural.

    The term “rule of law” was popularised by British jurist A. V. Dicey, who viewed the rule of law in common law systems as comprising three principles. First, that government must follow the law that it makes; second, that no one is exempt from the operation of the law and that it applies equally to all; and third, that general rights emerge from particular cases decided by the courts.

    The influence of Britain, France and the United States contributed to spreading the principle of the rule of law to other countries around the world.

    Legal theory and philosophy

    The Oxford English Dictionary has defined rule of law as:

    The authority and influence of law in society, esp. when viewed as a constraint on individual and institutional behaviour; (hence) the principle whereby all members of a society (including those in government) are considered equally subject to publicly disclosed legal codes and processes.

    Despite wide use by politicians, judges and academics, the rule of law has been described as “an exceedingly elusive notion”. In modern legal theory, there are at least two principal conceptions of the rule of law: a formalist or “thin” definition, and a substantive or “thick” definition. Formalist definitions of the rule of law do not make a judgment about the justness of law itself, but define specific procedural attributes that a legal framework must have in order to be in compliance with the rule of law. Substantive conceptions of the rule of law, generally from more recent authors, go beyond this and include certain substantive rights that are said to be based on, or derived from, the rule of law. One occasionally encounters a third “functional” conception.

    The functional interpretation of the term rule of law contrasts the rule of law with the rule of man. According to the functional view, a society in which government officers have a great deal of discretion has a low degree of “rule of law”, whereas a society in which government officers have little discretion has a high degree of “rule of law”. Upholding the rule of law can sometimes require the punishment of those who commit offenses that are justifiable under natural law but not statutory law.[55] The rule of law is thus somewhat at odds with flexibility, even when flexibility may be preferable.

    Social science analyses

    Economics

    Economists and lawyers have studied and analysed the rule of law’s impact on economic development. In particular, a major question in the area of law and economics is whether the rule of law matters to economic development, particularly in developing nations. The economist F. A. Hayek analyzed how the rule of law might be beneficial to the free market. Hayek proposed that under the rule of law, individuals would be able to make wise investments and future plans with some confidence in a successful return on investment when he stated: “under the Rule of Law the government is prevented from stultifying individual efforts by ad hoc action. Within the known rules of the game the individual is free to pursue his personal ends and desires, certain that the powers of government will not be used deliberately to frustrate his efforts.”

    Studies have shown that weak rule of law (for example, discretionary regulatory enforcement) discourages investment. Economists have found, for example, that a rise in discretionary regulatory enforcement caused US firms to abandon international investments.

    Constitutional economics is the study of the compatibility of economic and financial decisions within existing constitutional law frameworks. Aspects of constitutional frameworks relevant to both the rule of law and public economics include government spending on the judiciary, which, in many transitional and developing countries, is completely controlled by the executive. Additionally, judicial corruption may arise from both the executive branch and private actors. Standards of constitutional economics such as transparency can also be used during annual budget processes for the benefit of the rule of law. Further, the availability of an effective court system in situations of unfair government spending and executive impoundment of previously authorized appropriations is a key element for the success of the rule of law.

    The 2024 Nobel laureates Daron Acemoglu and James A. Robinson emphasize the importance of the rule of law in their book Why Nations Fail. They argue that the rule of law ensures that laws apply equally to everyone, including elites and government officials. This principle is crucial for promoting inclusive institutions, which are key to sustained economic growth and prosperity.

    The authors highlight historical examples, such as the French Revolution, where the rule of law helped dismantle absolutism and feudal privileges, paving the way for inclusive institutions. They also discuss how pluralistic political institutions are essential for the rule of law to thrive, as they create broad coalitions that support fairness and equality.

    Comparative approaches

    The term “rule of law” has been used primarily in English-speaking countries, and it is not yet fully clarified with regard to such well-established democracies as Sweden, Denmark, France, Germany, or Japan. A common language between lawyers of common law and civil law countries is critically important for research on the links between the rule of law and the real economy.

    The rule of law can be hampered when there is a disconnect between legal and popular consensus. For example, under the auspices of the World Intellectual Property Organization, nominally strong copyright laws have been implemented throughout most of the world; but because the attitude of much of the population does not conform to these laws, a rebellion against ownership rights has manifested in rampant piracy, including an increase in peer-to-peer file sharing. Similarly, in Russia, tax evasion is common and a person who admits he does not pay taxes is not judged or criticized by his colleagues and friends, because the tax system is viewed as unreasonable. Bribery likewise has different normative implications across cultures.

    Education

    UNESCO has argued that education has an important role in promoting the rule of law and a culture of lawfulness, providing an important protective function by strengthening learners’ abilities to face and overcome difficult life situations. Young people can be important contributors to a culture of lawfulness, and governments can provide educational support that nurtures positive values and attitudes in future generations. A movement towards education for justice seeks to promote the rule of law in schools.

    Political science

    In his book The Origins of Political Order, Francis Fukuyama identifies the rule of law as a requirement for political stability.

    Status in various jurisdictions

    The rule of law has been considered one of the key dimensions that determine the quality and good governance of a country. Research, like the Worldwide Governance Indicators, defines the rule of law as “the extent to which agents have confidence in and abide by the rules of society, and in particular the quality of contract enforcement, the police and the courts, as well as the likelihood of crime or violence.” Based on this definition, the Worldwide Governance Indicators project has developed aggregate measurements for the rule of law in more than 200 countries. Other evaluations such as the World Justice Project Rule of Law Index show that adherence to the rule of law fell in 61% of countries in 2022. Globally, this means that 4.4 billion people live in countries where the rule of law declined in 2021.

    United States

    All government officers of the United States, including the President, Justices of the Supreme Court, state judges and legislators, and all members of Congress, pledge first and foremost to uphold the Constitution, affirming that the rule of law is superior to the rule of any human leader. At the same time, the federal government has considerable discretion: the legislative branch is free to decide what statutes it will write, as long as it stays within its enumerated powers and respects the constitutionally protected rights of individuals. Likewise, the judicial branch has a degree of judicial discretion, and the executive branch also has various discretionary powers including prosecutorial discretion.

    James Wilson said during the Philadelphia Convention in 1787 that, “Laws may be unjust, may be unwise, may be dangerous, may be destructive; and yet not be so unconstitutional as to justify the Judges in refusing to give them effect.” George Mason agreed that judges “could declare an unconstitutional law void. But with regard to every law, however unjust, oppressive or pernicious, which did not come plainly under this description, they would be under the necessity as judges to give it a free course.” Chief Justice John Marshall took a similar position in 1827: “When its existence as law is denied, that existence cannot be proved by showing what are the qualities of a law.”

    Scholars continue to debate whether the U.S. Constitution adopted a particular interpretation of the “rule of law”, and if so, which one. For example, John Harrison asserts that the word “law” in the Constitution is simply defined as that which is legally binding, rather than being “defined by formal or substantive criteria”, and therefore judges do not have discretion to decide that laws fail to satisfy such unwritten and vague criteria. Law professor Frederick Mark Gedicks disagrees, writing that Cicero, Augustine, Thomas Aquinas, and the framers of the U.S. Constitution believed that “an unjust law was not really a law at all”.

    Some modern scholars contend that the rule of law has been corroded during the past century by the instrumental view of law promoted by legal realists such as Oliver Wendell Holmes and Roscoe Pound. For example, Brian Tamanaha asserts: “The rule of law is a centuries-old ideal, but the notion that law is a means to an end became entrenched only in the course of the nineteenth and twentieth centuries.”

    Others argue that the rule of law has survived but was transformed to allow for the exercise of discretion by administrators. For much of American history, the dominant notion of the rule of law in administrative law has been some version of Dicey’s, that is, individuals should be able to challenge an administrative order by bringing suit in a court of general jurisdiction. The increased number of administrative cases led to fears that excess judicial oversight over administrative decisions would overwhelm the courts and destroy the advantages of specialization that led to the creation of administrative agencies in the first place. By 1941, a compromise had emerged. If administrators adopted procedures that more or less tracked “the ordinary legal manner” of the courts, further review of the facts by “the ordinary Courts of the land” was unnecessary. Thus Dicey’s rule of law was recast into a purely procedural form.

    On July 1, 2024, in Trump v. United States, the Supreme Court held that presidents have absolute immunity for acts committed as president within their core constitutional purview, at least presumptive immunity for official acts within the outer perimeter of their official responsibility, and no immunity for unofficial acts. Legal scholars have warned of the negative impact of this decision on the status of the rule of law in the United States. Prior to that, in 1973 and 2000 the Office of Legal Counsel within the Department of Justice issued opinions saying that a sitting president cannot be indicted or prosecuted, but it is constitutional to indict and try a former president for the same offenses for which the President was impeached by the House of Representatives and acquitted by the Senate under the Impeachment Disqualification Clause of Article I, Section III.

    Numerous definitions of “rule of law” are used in United States governmental bodies. An organization’s definition might depend on that organization’s goal. For instance, military occupation or counterinsurgency campaigns may necessitate prioritising physical security over human rights. U.S. Army doctrine and U.S. Government (USG) interagency agreements might see the rule of law as a principle of governance. Outlines of different definitions are given in a JAG Corps handbook for judge advocates deployed with the US Army.

    Europe

    The preamble of the European Convention for the Protection of Human Rights and Fundamental Freedoms refers to “the governments of European countries which are like-minded and have a common heritage of political traditions, ideals, freedom and the rule of law”.

    In France and Germany the concepts of the rule of law (État de droit and Rechtsstaat respectively) are analogous to the principles of constitutional supremacy and protection of fundamental rights from public authorities, particularly the legislature. France was one of the early pioneers of the ideas of the rule of law. The German interpretation is more rigid but similar to that of France and the United Kingdom.

    United Kingdom

    Main article: Rule of law in the United Kingdom
    See also: History of the constitution of the United Kingdom
    In the United Kingdom the rule of law is a long-standing principle of the way the country is governed, dating from England’s Magna Carta in 1215 and the Bill of Rights 1689. In his 19th-century classic work Introduction to the Study of the Law of the Constitution (1885), A. V. Dicey, a constitutional scholar and lawyer, wrote of the twin pillars of the British constitution: the rule of law and parliamentary sovereignty.

    Asia

    East Asian cultures are influenced by two schools of thought: Confucianism, which advocated good governance as rule by leaders who are benevolent and virtuous, and Legalism, which advocated strict adherence to law. The influence of one school of thought over the other has varied throughout the centuries. One study indicates that throughout East Asia, only South Korea, Singapore, Japan, Taiwan and Hong Kong have societies that are robustly committed to a law-bound state. According to Awzar Thi, a member of the Asian Human Rights Commission, the rule of law in Cambodia and most of Asia is weak or nonexistent:

    Apart from a number of states and territories, across the continent there is a huge gulf between the rule of law rhetoric and reality. In Thailand, the police force favors the rich and the corrupt. In Cambodia, judges are proxies for the ruling political party … That a judge may harbor political prejudice or apply the law unevenly are the smallest worries for an ordinary criminal defendant in Asia. More likely ones are: Will the police fabricate the evidence? Will the prosecutor bother to show up? Will the judge fall asleep? Will I be poisoned in prison? Will my case be completed within a decade?

    In countries such as China and Vietnam, the transition to a market economy has been a major factor in a move toward the rule of law, because the rule of law is important to foreign investors and to economic development. It remains unclear whether the rule of law in countries like China and Vietnam will be limited to commercial matters or will spill into other areas as well, and if so whether that spillover will enhance prospects for related values such as democracy and human rights.

  • Bonnie Blue

    Tia Emma Billinger (born May 1999), known professionally as Bonnie Blue, is an English pornographic film actress and OnlyFans content creator. She has been controversial for her sexual content with university students and married men, her claims to have had sex with 1,057 men in one day, and her goals of having sex with as many men as possible. She made several appearances on podcasts in 2024 which generated several weeks of backlash on Twitter; a subsequent appearance on This Morning prompted 188 complaints to Ofcom. An advert featuring her and involving the online casino Stake later prompted that firm to exit the UK market.

    Tia Emma Billinger was born in May 1999 in Stapleford, Nottinghamshire. She grew up with her mother, her step-father, and two half-sisters but never knew her biological father. She attended the Friesland School and planned on being a professional dancer or midwife. She attended Vibez Danceworks in Long Eaton and, with her sister, competed in the 2015 British Street Dance Championships. At age 15, she began dating a classmate, whom she married in Westminster in February 2022; the pair later moved to Australia. In 2023, disillusioned by a 9-to-5 job and inspired by others’ successes on TikTok, she became a webcam model and, after making more money than expected, launched an OnlyFans page.

    Blue then began earning money by filming herself having sex with 18- and 19-year-old students, as they were her target audience. She later supplemented her money-for-sex income with married men after a student’s father became jealous, and then began making money via sex with lecturers. In March 2024, her online popularity soared after she visited Cancún and then schoolies week in Australia and freshers’ week in the UK. For the latter, she posted her address online and allowed men to queue to have sex with her at no cost, so long as they consented to it being filmed and used in her online content. She visited Nottingham and Derby in September 2024 and Birmingham in October 2024, each for a week, with the intention of having sex with as many students as possible.

    Blue made several appearances on podcasts, including Dream On with Lottie Moss and Saving Grace with GK Barry. Clips of her podcast appearances, in which she claimed to have had sex with “hundreds” of “barely legal” students, went viral online and generated significant backlash on Twitter over several weeks, with some questioning what repercussions her co-stars could suffer and others accusing her of manipulation. Some also argued that filming and distributing amateur pornography featuring 18- and 19-year-olds was a moral grey area, while journalist Sophie Wilkinson described her as “a cog in a far bigger machine” and “want[ed] to know who hurt her”. Blue later stated that those complaining about the young age of her co-stars should instead encourage the government of the United Kingdom to increase the country’s age of consent and attributed the reaction to Saving Grace on the podcast’s female audience, prompting others to accuse her of misogyny. She later reiterated her stance on married men on The Kyle and Jackie O Show. Barry later deleted the episode.

    In November 2024, after having her visas cancelled in Australia and Fiji for working without an appropriate visa, Blue appeared on the ITV daytime show This Morning, in which she debated against Ashley James over the promotion of her content. The appearance prompted 188 complaints to Ofcom. James later wrote a piece for Grazia stating that she had debated Blue, as she had found previous interviews lacking on the grounds that women had not challenged her, and men had only done so on grounds she considered patriarchal such as her body count or the opinion of her father. Claire Hubble of the i wrote that Blue’s virality was “a reflection of the outrage economy” and compared her success to that of Katie Hopkins.

  • Zheng Qinwen

    Zheng Qinwen (born 8 October 2002) is a Chinese professional tennis player. She won the gold medal in women’s singles at the 2024 Paris Olympics, defeating world No. 1 Iga Świątek en route to becoming the first Asian tennis player to win an Olympic gold medal in singles. She reached a career-high WTA ranking of No. 5 on 11 November 2024, and is only the second Chinese player to reach the top 10, after Li Na.

    Zheng won her first WTA Tour tournament in 2023 at the Palermo Ladies Open, successfully defending the title the following year. In total, she has won five WTA Tour titles, one WTA Challenger title, and eight ITF singles titles, and was named the 2022 WTA Newcomer of the Year and the 2023 WTA Most Improved Player of the Year. She contested a major final at the 2024 Australian Open.

    Zheng was born in Shiyan, Hubei. Until the age of three, she spent time in her maternal grandmother’s home in Chengdu, Sichuan, where her mother originated. Zheng began playing tennis at age seven. Two months later, eight-year-old Zheng left her family in Shiyan to train in Wuhan. About three years later, she moved to Beijing to train with Carlos Rodriguez, the former coach of Zheng’s idol Li Na, and then moved to Barcelona (Spain) with her mother in 2019. She began working with coach Pere Riba in 2021.

  • Madison Keys

    Madison Keys (born February 17, 1995) is an American professional tennis player. She has been ranked as high as world No. 5 in singles by the WTA. Keys has won ten career singles titles, including a major at the 2025 Australian Open when she defeated world No. 1 and two-time defending champion, Aryna Sabalenka. She was also a major finalist at the 2017 US Open.

    Keys was inspired to start playing tennis after watching Venus Williams at Wimbledon on TV. Keys turned professional on her 14th birthday, becoming one of the youngest players to win a WTA Tour level match a few months later. Keys first entered the top 100 of the WTA rankings in 2013 at the age of 17. She had her first breakthrough at a major in early 2015 when she reached the semifinals of the Australian Open as a teenager. Keys debuted in the top 10 of the WTA rankings in 2016, becoming the first American woman to realize this milestone since Serena Williams 17 years earlier. She reached the US Open final in 2017, losing to friend Sloane Stephens. Following years of injury struggles and lower results, Keys won her first major title at the 2025 Australian Open, consecutively defeating world No. 2 Iga Świątek and world No. 1 Aryna Sabalenka.

    Known for a fast serve and one of the most powerful forehands in the game, Keys has used her aggressive playing style to become one of the leaders of her generation of American tennis, alongside Stephens and Sofia Kenin. She has had success on all surfaces, winning at least one title on each and having reached at least the quarterfinals of all four majors.

    Keys was born on February 17, 1995, in Rock Island, one of the Quad Cities in northwestern Illinois. Her parents Rick and Christine are both attorneys, and her father was a Division III All-American college basketball player at Augustana College. She has an older sister named Sydney and two younger sisters named Montana and Hunter, none of whom play tennis. Keys’ passion for tennis started at a young age. Her interest in the sport arose from watching Wimbledon on television when she was four years old. Keys asked her parents for a white tennis dress like the one Venus Williams was wearing, and they offered to get her one if she started playing tennis. Her father said that after this bargain, “All [Madison] did was try to hit balls into the next yard — home runs.”

    Keys started playing tennis at the Quad-City Tennis Club in Moline. She began taking lessons regularly at seven and began competing in tournaments at the age of nine. When she was ten years old, she moved to Florida with her mother and younger sisters so that she could train at the Evert Tennis Academy founded by John Evert and also partly run by his sister, International Tennis Hall of Famer Chris Evert. At first, John said that he “thought she was very athletic, a raw talent physically. She definitely needed to be cleaned up with her strokes.” Keys noted that her game was very different when she was starting out at the academy compared to how it is as a pro, saying, “I didn’t like groundstrokes, I didn’t like long points that much, so I would just run into the net and try and volley.” Nonetheless, Keys’s coaches had high hopes for her. Chris said, “At 12 years old, she’s pretty much an all-court player; she’s not one-dimensional, which is pretty rare in this day and age.”