
  • Diplomacy

    Diplomacy is the communication by representatives of state, intergovernmental, or non-governmental institutions intended to influence events in the international system.

    Diplomacy is the main instrument of foreign policy, which represents the broader goals and strategies that guide a state’s interactions with the rest of the world. International treaties, agreements, alliances, and other manifestations of international relations are usually the result of diplomatic negotiations and processes. Diplomats may also help shape a state’s foreign policy by advising government officials.

    Modern diplomatic methods, practices, and principles originated largely from 17th-century European customs. Beginning in the early 20th century, diplomacy became professionalized; the 1961 Vienna Convention on Diplomatic Relations, ratified by most of the world’s sovereign states, provides a framework for diplomatic procedures, methods, and conduct. Most diplomacy is now conducted by accredited officials, such as envoys and ambassadors, through a dedicated foreign affairs office. Diplomats operate through diplomatic missions, most commonly consulates and embassies, and rely on a number of support staff; the term diplomat is thus sometimes applied broadly to diplomatic and consular personnel and foreign ministry officials.

    The term diplomacy is derived from the 18th-century French term diplomate (“diplomat” or “diplomatist”), based on the ancient Greek diplōma, which roughly means “an object folded in two”. This reflected the practice of sovereigns providing a folded document to confer some official privilege; prior to the invention of the envelope, folding a document served to protect the privacy of its content. The term was later applied to all official documents, such as those containing agreements between governments, and thus became identified with international relations. This established history has in recent years been criticized by scholars pointing out how the term originates in the political context of the French Revolution.

    Some of the earliest known diplomatic records are the Amarna letters written between the pharaohs of the eighteenth dynasty of Egypt and the Amurru rulers of Canaan during the 14th century BC. Peace treaties were concluded between the Mesopotamian city-states of Lagash and Umma around 2100 BC. Following the Battle of Kadesh in 1274 BC during the nineteenth dynasty, the pharaoh of Egypt and the ruler of the Hittite Empire created one of the first known international peace treaties, which survives in stone tablet fragments, now generally called the Egyptian–Hittite peace treaty.

    The ancient Greek city-states on some occasions dispatched envoys to negotiate specific issues, such as war and peace or commercial relations, but did not have diplomatic representatives regularly posted in each other’s territory. However, some of the functions given to modern diplomatic representatives were fulfilled by a proxenos, a citizen of the host city who had friendly relations with another city, often through familial ties. In times of peace, diplomacy was even conducted with non-Hellenic rivals such as the Achaemenid Empire of Persia, though it was ultimately conquered by Alexander the Great of Macedon. Alexander was also adept at diplomacy, realizing that the conquest of foreign cultures would be better achieved by having his Macedonian and Greek subjects intermingle and intermarry with native populations. For instance, Alexander took as his wife a Sogdian woman of Bactria, Roxana, after the siege of the Sogdian Rock, in order to placate the rebelling populace. Diplomacy remained a necessary tool of statecraft for the great Hellenistic states that succeeded Alexander’s empire, such as the Ptolemaic Kingdom and Seleucid Empire, which fought several wars in the Near East and often negotiated peace treaties through marriage alliances.

    Relations with the Ottoman Empire were particularly important to Italian states, to which the Ottoman government was known as the Sublime Porte. The maritime republics of Genoa and Venice depended less and less upon their nautical capabilities, and more and more upon the perpetuation of good relations with the Ottomans. Interactions between merchants, diplomats, and clergymen from the Italian states and the Ottoman Empire helped inaugurate new forms of diplomacy and statecraft. Eventually the primary role of the diplomat, originally a negotiator, evolved into that of a persona representing an autonomous state in all aspects of political affairs. The emergence of the Ottoman Empire as a powerful political force made it evident that other sovereigns needed to accommodate themselves to it diplomatically, and the diplomatic atmosphere of the early modern period thus came to rest on a foundation of conformity to Ottoman culture.

    One of the earliest realists in international relations theory was the 6th-century BC military strategist Sun Tzu (d. 496 BC), author of The Art of War. He lived during a time in which rival states were starting to pay less heed to their traditional deference to the figurehead monarchs of the Zhou dynasty (c. 1050–256 BC) while each vied for power and total conquest. However, a great deal of diplomacy in establishing allies, bartering land, and signing peace treaties was necessary for each warring state, and the idealized role of the “persuader/diplomat” developed.

    From the Battle of Baideng (200 BC) to the Battle of Mayi (133 BC), the Han dynasty was forced to uphold a marriage alliance and pay an exorbitant amount of tribute (in silk, cloth, grain, and other foodstuffs) to the powerful northern nomadic Xiongnu, who had been consolidated by Modu Shanyu. After the Xiongnu sent word to Emperor Wen of Han (r. 180–157 BC) that they controlled areas stretching from Manchuria to the Tarim Basin oasis city-states, a treaty was drafted in 162 BC proclaiming that everything north of the Great Wall would belong to the nomads’ lands, while everything south of it would be reserved for the Han Chinese. The treaty was renewed no fewer than nine times but did not restrain some Xiongnu tuqi from raiding Han borders. That was until the far-flung campaigns of Emperor Wu of Han (r. 141–87 BC), which shattered the unity of the Xiongnu and allowed the Han to conquer the Western Regions; under Wu, in 104 BC the Han armies ventured as far as Fergana in Central Asia to battle the Yuezhi, who had conquered Hellenistic Greek areas.

    The Koreans and Japanese during the Chinese Tang dynasty (618–907 AD) looked to the Chinese capital of Chang’an as the hub of civilization and emulated its central bureaucracy as the model of governance. The Japanese sent frequent embassies to China in this period, although they halted these trips in 894 when the Tang seemed on the brink of collapse. After the devastating An Shi Rebellion from 755 to 763, the Tang dynasty was in no position to reconquer Central Asia and the Tarim Basin. After several conflicts with the Tibetan Empire spanning several decades, the Tang finally made a truce and signed a peace treaty with them in 841.

    In the 11th century during the Song dynasty (960–1279), there were shrewd ambassadors such as Shen Kuo and Su Song who achieved diplomatic success with the Liao dynasty, the often hostile Khitan neighbor to the north. Both diplomats secured the rightful borders of the Song dynasty through knowledge of cartography and dredging up old court archives. There was also a triad of warfare and diplomacy between these two states and the Tangut Western Xia dynasty to the northwest of Song China (centered in modern-day Shaanxi). After warring with the Lý dynasty of Vietnam from 1075 to 1077, Song and Lý made a peace agreement in 1082 to exchange the respective lands they had captured from each other during the war.

    Long before the Tang and Song dynasties, the Chinese had sent envoys into Central Asia, India, and Persia, starting with Zhang Qian in the 2nd century BC. Another notable event in Chinese diplomacy was the Chinese embassy mission of Zhou Daguan to the Khmer Empire of Cambodia in the 13th century. Chinese diplomacy was a necessity in the distinctive period of Chinese exploration. Since the Tang dynasty (618–907 AD), the Chinese also became heavily invested in sending diplomatic envoys abroad on maritime missions into the Indian Ocean, to India, Persia, Arabia, East Africa, and Egypt. Chinese maritime activity increased dramatically during the commercialized period of the Song dynasty, with new nautical technologies, many more private ship owners, and an increasing amount of economic investors in overseas ventures.

    During the Mongol Empire (1206–1294) the Mongols created something similar to today’s diplomatic passport, called the paiza. The paiza came in three types (golden, silver, and copper), depending on the envoy’s level of importance. With the paiza came the authority to request food, transport, and lodging from any city, village, or clan within the empire without difficulty.

    In the 17th century, the Qing dynasty concluded a series of treaties with Czarist Russia, beginning with the Treaty of Nerchinsk in 1689. It was followed by the Treaty of Aigun and the Convention of Peking in the mid-19th century.

    As European power spread around the world in the 18th and 19th centuries so too did its diplomatic model, and Asian countries adopted syncretic or European diplomatic systems. For example, as part of diplomatic negotiations with the West over control of land and trade in China in the 19th century after the First Opium War, the Chinese diplomat Qiying gifted intimate portraits of himself to representatives from Italy, England, the United States, and France.

    Ancient India, with its kingdoms and dynasties, had a long tradition of diplomacy. The oldest treatise on statecraft and diplomacy, Arthashastra, is attributed to Kautilya (also known as Chanakya), who was the principal adviser to Chandragupta Maurya, the founder of the Maurya dynasty who ruled in the 3rd century BC. It incorporates a theory of diplomacy, of how in a situation of mutually contesting kingdoms, the wise king builds alliances and tries to checkmate his adversaries. The envoys sent at the time to the courts of other kingdoms tended to reside for extended periods of time, and Arthashastra contains advice on the deportment of the envoy, including the trenchant suggestion that “he should sleep alone”. The highest morality for the king is that his kingdom should prosper.

    A new analysis of Arthashastra brings out that hidden inside the 6,000 aphorisms of prose (sutras) are pioneering political and philosophic concepts. It covers the internal and external spheres of statecraft, politics, and administration. The normative element is the political unification of the geopolitical and cultural subcontinent of India. This work comprehensively studies state governance; it urges avoidance of injury to living creatures and of malice, and commends compassion, forbearance, truthfulness, and uprightness. It presents a rajmandala (grouping of states), a model that places the home state surrounded by twelve competing entities which can be either potential adversaries or latent allies, depending on how relations with them are managed. This is the essence of realpolitik. It also offers four upaya (policy approaches): conciliation, gifts, rupture or dissent, and force. It counsels that war is the last resort, as its outcome is always uncertain. This is the first expression of the raison d’etat doctrine, as well as of humanitarian law: conquered people must be treated fairly and assimilated.

    The key challenge to the Byzantine Empire was to maintain a set of relations between itself and its sundry neighbors, including the Georgians, Iberians, the Germanic peoples, the Bulgars, the Slavs, the Armenians, the Huns, the Avars, the Franks, the Lombards, and the Arabs, that embodied and so maintained its imperial status. All these neighbors lacked a key resource that Byzantium had taken over from Rome, namely a formalized legal structure. When they set about forging formal political institutions, they were dependent on the empire. Whereas classical writers are fond of making a sharp distinction between peace and war, for the Byzantines diplomacy was a form of war by other means. With a regular army of 120,000–140,000 men after the losses of the 7th century, the empire’s security depended on activist diplomacy.

    Byzantium’s “Bureau of Barbarians” was the first foreign intelligence agency, gathering information on the empire’s rivals from every imaginable source. While on the surface a protocol office—its main duty was to ensure foreign envoys were properly cared for and received sufficient state funds for their maintenance, and it kept all the official translators—it clearly had a security function as well. On Strategy, from the 6th century, offers advice about foreign embassies: “Envoys who are sent to us should be received honorably and generously, for everyone holds envoys in high esteem. Their attendants, however, should be kept under surveillance to keep them from obtaining any information by asking questions of our people.”

    In Europe, early modern diplomacy’s origins are often traced to the states of Northern Italy in the early Renaissance, with the first embassies being established in the 13th century. Milan played a leading role, especially under Francesco Sforza who established permanent embassies to the other city-states of Northern Italy. Tuscany and Venice were also flourishing centers of diplomacy from the 14th century onward. It was in the Italian Peninsula that many of the traditions of modern diplomacy began, such as the presentation of an ambassador’s credentials to the head of state.

    From Italy, the practice spread across Europe. Milan was the first to send a representative to the court of France, in 1455. However, Milan refused to host French representatives, fearing they would conduct espionage and intervene in its internal affairs. As foreign powers such as France and Spain became increasingly involved in Italian politics, the need to accept emissaries was recognized. Soon the major European powers were exchanging representatives. Spain was the first to send a permanent representative; it appointed an ambassador to the Court of St. James’s (i.e. England) in 1487. By the late 16th century, permanent missions had become customary. The Holy Roman Emperor, however, did not regularly send permanent legates, as they could not represent the interests of all the German princes (who were in theory all subordinate to the Emperor, but in practice each independent).

    Between 1500 and 1700, the rules of modern diplomacy were further developed. French replaced Latin as the language of diplomacy from about 1715. The top rank of representatives was an ambassador. At that time an ambassador was a nobleman, with the rank of the noble assigned varying according to the prestige of the country to which he was delegated. Strict standards were developed for ambassadors, requiring them to have large residences, host lavish parties, and play an important role in the court life of their host nation. In Rome, the most prized posting for a Catholic ambassador, the French and Spanish representatives would have a retinue of up to a hundred. Even in smaller posts, ambassadors were very expensive. Smaller states would send and receive envoys, who were a rung below the ambassador. Somewhere between the two was the position of minister plenipotentiary.

    Diplomacy was a complex affair, even more so than now. The ambassadors from each state were ranked by complex levels of precedence that were much disputed. States were normally ranked by the title of the sovereign; for Catholic nations the emissary from the Vatican was paramount, then those from the kingdoms, then those from duchies and principalities. Representatives from republics were ranked the lowest (which often angered the leaders of the numerous German, Scandinavian, and Italian republics). Determining precedence between two kingdoms depended on a number of factors that often fluctuated, leading to near-constant squabbling.

    Ambassadors were often nobles with little foreign experience and no expectation of a career in diplomacy. They were supported by their embassy staff. These professionals would be sent on longer assignments and would be far more knowledgeable than the higher-ranking officials about the host country. Embassy staff would include a wide range of employees, including some dedicated to espionage. The need for skilled individuals to staff embassies was met by the graduates of universities, and this led to a great increase in the study of international law, French, and history at universities throughout Europe.

    At the same time, permanent foreign ministries began to be established in almost all European states to coordinate embassies and their staffs. These ministries were still far from their modern form, and many of them had extraneous internal responsibilities. Britain had two departments with frequently overlapping powers until 1782. They were also far smaller than they are currently. France, which boasted the largest foreign affairs department, had only some 70 full-time employees in the 1780s.

    The elements of modern diplomacy slowly spread to Eastern Europe and Russia, arriving by the early 18th century. The entire edifice would be greatly disrupted by the French Revolution and the subsequent years of warfare. The revolution would see commoners take over the diplomacy of the French state, and of those conquered by revolutionary armies. Ranks of precedence were abolished. Napoleon also refused to acknowledge diplomatic immunity, imprisoning several British diplomats accused of scheming against France.

    After the fall of Napoleon, the Congress of Vienna of 1815 established an international system of diplomatic rank. Disputes over precedence among nations (and therefore the appropriate diplomatic ranks used) were first addressed at the Congress of Aix-la-Chapelle in 1818, but persisted for over a century until after World War II, when the rank of ambassador became the norm. During that period, figures such as the German Chancellor Otto von Bismarck were renowned for international diplomacy.

    Diplomats and historians often refer to a foreign ministry by its address: the Ballhausplatz (Vienna), the Quai d’Orsay (Paris), the Wilhelmstrasse (Berlin), Itamaraty (Brasília), and Foggy Bottom (Washington, D.C.). For the Russian foreign ministry, it was the Choristers’ Bridge (Saint Petersburg) until 1917, while “Consulta” referred to the Italian foreign ministry, based in the Palazzo della Consulta (Rome) from 1874 to 1922.

    The sanctity of diplomats has long been observed, underpinning the modern concept of diplomatic immunity. While there have been a number of cases where diplomats have been killed, this is normally viewed as a great breach of honor. Genghis Khan and the Mongols were well known for strongly insisting on the rights of diplomats, and they would often wreak horrific vengeance against any state that violated these rights.

    Diplomatic rights were established in the mid-17th century in Europe and have spread throughout the world. These rights were formalized by the 1961 Vienna Convention on Diplomatic Relations, which protects diplomats from being persecuted or prosecuted while on a diplomatic mission. If a diplomat does commit a serious crime while in a host country, he or she may be declared persona non grata (unwanted person). Such diplomats are then often tried for the crime in their homeland.

    Diplomatic communications are also viewed as sacrosanct, and diplomats have long been allowed to carry documents across borders without being searched. The mechanism for this is the so-called “diplomatic bag” (or, in some countries, the “diplomatic pouch”). While radio and digital communication have become more standard for embassies, diplomatic pouches are still quite common and some countries, including the United States, declare entire shipping containers as diplomatic pouches to bring sensitive material (often building supplies) into a country.

    In times of hostility, diplomats are often withdrawn for reasons of personal safety, as well as in some cases when the host country is friendly but there is a perceived threat from internal dissidents. Ambassadors and other diplomats are sometimes recalled temporarily by their home countries as a way to express displeasure with the host country. In both cases, lower-level employees still remain to actually do the business of diplomacy.

    Diplomacy is closely linked to espionage or the gathering of intelligence. Embassies are bases for both diplomats and spies, and some diplomats are essentially openly acknowledged spies. For instance, the job of military attachés includes learning as much as possible about the military of the nation to which they are assigned. They do not try to hide this role and, as such, are only invited to events allowed by their hosts, such as military parades or air shows. There are also deep-cover spies operating in many embassies. These individuals are given fake positions at the embassy, but their main task is to illegally gather intelligence, usually by coordinating spy rings of locals or other spies. For the most part, spies operating out of embassies gather little intelligence themselves and their identities tend to be known by the opposition. If discovered, these diplomats can be expelled from an embassy, but for the most part counter-intelligence agencies prefer to keep these agents in situ and under close monitoring.

    The information gathered by spies plays an increasingly important role in diplomacy. Arms-control treaties would be impossible without the power of reconnaissance satellites and agents to monitor compliance. Information gleaned from espionage is useful in almost all forms of diplomacy, everything from trade agreements to border disputes.

    Nations sometimes resort to international arbitration when faced with a specific question or point of contention in need of resolution. For most of history, there were no official or formal procedures for such proceedings; they were generally expected to abide by principles and protocols of international law and justice.

    Sometimes these took the form of formal arbitrations and mediations. In such cases, a commission of diplomats might be convened to hear all sides of an issue and to come to some sort of ruling based on international law.

    In the modern era, much of this work is often carried out by the International Court of Justice at The Hague, or other formal commissions, agencies and tribunals, working under the United Nations. Below are some examples.

    The Hay–Herbert Treaty was enacted after the United States and Britain submitted a dispute to international mediation about the Canada–U.S. border.

    Other times, resolutions were sought through the convening of international conferences. In such cases, there are fewer ground rules and fewer formal applications of international law. However, participants are expected to guide themselves through principles of international fairness, logic, and protocol.

    Some examples of these formal conferences are:

    • Congress of Vienna (1815) – After Napoleon was defeated, there were many diplomatic questions waiting to be resolved. These included the shape of the political map of Europe, the disposition of the political and nationalist claims of ethnic groups and nationalities wishing to have some political autonomy, and the resolution of competing claims by the European powers.
    • The Congress of Berlin (13 June – 13 July 1878) was a meeting of the European Great Powers and the Ottoman Empire’s leading statesmen in Berlin. In the wake of the Russo-Turkish War (1877–78), the meeting’s aim was to reorganize conditions in the Balkans.

    Sometimes nations convene official negotiation processes to settle a specific dispute or specific issue between several nations which are parties to a dispute. These are similar to the conferences mentioned above, as there are technically no established rules or procedures. However, there are general principles and precedents which help define a course for such proceedings.

    Some examples are:

    Camp David Accords – Convened in 1978 by U.S. President Jimmy Carter at Camp David to reach an agreement between Prime Minister Menachem Begin of Israel and President Anwar Sadat of Egypt. After weeks of negotiation, an agreement was reached and the accords were signed, later leading directly to the Egypt–Israel peace treaty of 1979.
    Treaty of Portsmouth – Enacted after President Theodore Roosevelt brought together the delegates from Russia and Japan to settle the Russo-Japanese War. Roosevelt’s personal intervention settled the conflict and earned him the Nobel Peace Prize.

    There are a variety of diplomatic categories and diplomatic strategies employed by organizations and governments to achieve their aims, each with its own advantages and disadvantages.

    Appeasement

    Main article: Appeasement
    Appeasement is a policy of making concessions to an aggressor in order to avoid confrontation; because of its failure to prevent World War II, appeasement is not considered a legitimate tool of modern diplomacy. The view that appeasement in the face of tyranny never works and eventually leads to conflict is based on such historical lessons.

    Counterinsurgency

    Counterinsurgency diplomacy, or expeditionary diplomacy, developed by diplomats deployed to civil-military stabilization efforts in Iraq and Afghanistan, employs diplomats at tactical and operational levels, outside traditional embassy environments and often alongside military or peacekeeping forces. Counterinsurgency diplomacy may provide political environment advice to local commanders, interact with local leaders, and facilitate the governance efforts, functions and reach of a host government.

    Debt-trap

    Main article: Debt-trap diplomacy
    Debt-trap diplomacy is carried out in bilateral relations, with a powerful lending country seeking to saddle a borrowing nation with enormous debt so as to increase its leverage over it.

    Economic

    Main article: Economic diplomacy
    Economic diplomacy is the use of aid or other types of economic policy as a means to achieve a diplomatic agenda.

    Gunboat

    Main article: Gunboat diplomacy
    Gunboat diplomacy is the use of conspicuous displays of military power as a means of intimidation to influence others. Since it is inherently coercive, it typically lies near the edge between peace and war, and is usually exercised in the context of imperialism or hegemony. An emblematic example is the Don Pacifico Incident in 1850, in which the United Kingdom blockaded the Greek port of Piraeus in retaliation for the harming of a British subject and the failure of the Greek government to provide him with restitution.

    Hostage

    Main article: Hostage diplomacy
    Hostage diplomacy is the taking of hostages by a state or quasi-state actor to fulfill diplomatic goals. It is a type of asymmetric diplomacy often used by weaker states to pressure stronger ones. Hostage diplomacy has been practiced from prehistory to the present day.

    Humanitarian

    Humanitarian diplomacy is the set of activities undertaken by various actors with governments, (para)military organizations, or personalities in order to intervene or push intervention in a context where humanity is in danger. According to Antonio De Lauri, a research professor at the Chr. Michelsen Institute, humanitarian diplomacy “ranges from negotiating the presence of humanitarian organizations to negotiating access to civilian populations in need of protection. It involves monitoring assistance programs, promoting respect for international law, and engaging in advocacy in support of broader humanitarian goals”.

    Migration

    Migration diplomacy is the use of human migration in a state’s foreign policy. American political scientist Myron Weiner argued that international migration is intricately linked to states’ international relations. More recently, Kelly Greenhill has identified how states may employ ‘weapons of mass migration’ against target states in their foreign relations. Migration diplomacy may involve the use of refugees, labor migrants, or diasporas in states’ pursuit of international diplomacy goals. In the context of the Syrian Civil War, for example, Syrian refugees were used in Jordanian, Lebanese, and Turkish migration diplomacy.

    Nuclear

    Nuclear diplomacy is the area of diplomacy related to preventing nuclear proliferation and nuclear war. One of the most well-known (and most controversial) philosophies of nuclear diplomacy is mutually assured destruction (MAD).

    Preventive

    Main article: Preventive diplomacy
    Preventive diplomacy is carried out through quiet means (as opposed to “gun-boat diplomacy”, which is backed by the threat of force, or “public diplomacy”, which makes use of publicity). It is also understood that circumstances may exist in which the consensual use of force (notably preventive deployment) might be welcomed by parties to a conflict with a view to achieving the stabilization necessary for diplomacy and related political processes to proceed. This is to be distinguished from the use of “persuasion”, “suasion”, “influence”, and other non-coercive approaches explored below.

    Preventive diplomacy, in the view of one expert, is “the range of peaceful dispute resolution approaches mentioned in Article 33 of the UN Charter [on the pacific settlement of disputes] when applied before a dispute crosses the threshold to armed conflict.” It may take many forms, with different means employed. One form of diplomacy which may be brought to bear to prevent violent conflict (or to prevent its recurrence) is “quiet diplomacy”. When one speaks of the practice of quiet diplomacy, definitional clarity is largely absent. In part this is due to a lack of any comprehensive assessment of exactly what types of engagement qualify, and how such engagements are pursued. On the one hand, a survey of the literature reveals no precise understanding or terminology on the subject. On the other hand, concepts are neither clear nor discrete in practice. Multiple definitions are often invoked simultaneously by theorists, and the activities themselves often mix and overlap in practice.

    Public

    Main article: Public diplomacy
    Public diplomacy is the exercise of influence through communication with the general public in another nation, rather than attempting to influence the nation’s government directly. This communication may take the form of propaganda, or more benign forms such as citizen diplomacy, individual interactions between average citizens of two or more nations. Technological advances and the advent of digital diplomacy now allow instant communication with foreign citizens, and methods such as Facebook diplomacy and Twitter diplomacy are increasingly used by world leaders and diplomats.

    Quiet

    Also known as the “softly softly” approach, quiet diplomacy is the attempt to influence the behaviour of another state through secret negotiations or by refraining from taking a specific action. This method is often employed by states that lack alternative means to influence the target government, or that seek to avoid certain outcomes. For example, South Africa is described as engaging in quiet diplomacy with neighboring Zimbabwe to avoid appearing as “bullying” and subsequently engendering a hostile response. This approach can also be employed by more powerful states: U.S. President George W. Bush’s nonattendance at the 2002 World Summit on Sustainable Development constituted a form of quiet diplomacy, in response to the lack of UN support for the U.S.’ proposed invasion of Iraq.

    Science

    Main article: Science diplomacy
    Science diplomacy is the use of scientific collaborations among nations to address common problems and to build constructive international partnerships. Many experts and groups use a variety of definitions for science diplomacy. However, science diplomacy has become an umbrella term to describe a number of formal or informal technical, research-based, academic or engineering exchanges, with notable examples including CERN, the International Space Station, and ITER.

    Soft power

    Main article: Soft power
    Soft power, sometimes called “hearts and minds diplomacy”, as defined by Joseph Nye, is the cultivation of relationships, respect, or even admiration from others in order to gain influence, as opposed to more coercive approaches. Often incorrectly conflated with the practice of official diplomacy, soft power refers to non-state, culturally attractive factors that may predispose people to sympathize with a foreign culture based on affinity for its products, such as the American entertainment industry, schools, and music. A country’s soft power can come from three resources: its culture (in places where it is attractive to others), its political values (when it lives up to them at home and abroad), and its foreign policies (when they are seen as legitimate and having moral authority). A particular example of soft power is China’s use of giant pandas as diplomatic gifts, a practice known as panda diplomacy.

    City

    City diplomacy refers to cities using institutions and processes to engage in relations with other actors on the international stage, with the aim of representing themselves and their interests to one another. Especially today, city administrations and networks are increasingly active on transnationally relevant questions and issues ranging from the climate crisis to migration and the promotion of smart technology. As such, cities and city networks may seek to address and reshape national and sub-national conflicts, support their peers in achieving sustainable development, and attain certain levels of regional integration and solidarity among each other. Whereas diplomacy pursued by nation-states is often said to be disconnected from the citizenry, city diplomacy fundamentally rests on its proximity to the latter and seeks to leverage these ties “to build international strategies integrating both their values and interests.”

    Training

    Most countries provide professional training for their diplomats and maintain institutions specifically for that purpose. Private institutions also exist, as do establishments associated with organizations like the European Union and the United Nations.

    In early European diplomacy, a number of diplomatic ‘manuals’ were published, describing the legal aspects of early diplomacy as well as how diplomats should conduct themselves. Some manuals, such as one by Étienne Dolet, were also targeted at rulers, helping them to understand the characteristics of a good diplomat.

  • War

    War is an armed conflict between the armed forces of states, or between governmental forces and armed groups that are organized under a certain command structure and have the capacity to sustain military operations, or between such organized groups.

    It is generally characterized by widespread violence, destruction, and mortality, using regular or irregular military forces. Warfare refers to the common activities and characteristics of types of war, or of wars in general.

    Total war is warfare that is not restricted to purely legitimate military targets, and can result in massive civilian or other non-combatant suffering and casualties.

    The English word war derives from the 11th-century Old English words wyrre and werre, from Old French werre (guerre as in modern French), in turn from the Frankish *werra, ultimately deriving from the Proto-Germanic *werzō ‘mixture, confusion’. The word is related to the Old Saxon werran, Old High German werran, and the modern German verwirren, meaning ‘to confuse, to perplex, to bring into confusion’.

    Anthropologists disagree about whether warfare was common throughout human prehistory, or whether it was a more recent development following the invention of agriculture or organised states. It is difficult to determine whether warfare occurred during the Paleolithic due to the sparseness of known remains. Some sources claim that most Middle and Upper Paleolithic societies were possibly fundamentally egalitarian and may have rarely or never engaged in organized violence between groups (i.e. war). Evidence of violent conflict appears to increase during the Mesolithic period, from around 10,000 years ago onwards.

    Raymond Case Kelly, a cultural anthropologist and ethnologist from the US, claimed that before 400,000 years ago groups of people clashed much as groups of chimpanzees do, but that later they came to prefer “positive and peaceful social relations between neighboring groups, such as joint hunting, trading, and courtship.” In his book Warless Societies and the Origin of War, he explores the origins of modern wars and states that a high surplus product encourages conflict, so “raiding often begins in the richest environments”.

    In War Before Civilization, Lawrence H. Keeley, a professor at the University of Illinois, says approximately 90–95% of known societies throughout history engaged in at least occasional warfare, and many fought constantly. Keeley describes several styles of primitive combat such as small raids, large raids, and massacres. All of these forms of warfare were used by primitive societies, a finding supported by other researchers. Keeley explains that early war raids were not well organized, as the participants did not have any formal training. Scarcity of resources meant defensive works were not a cost-effective way to protect the society against enemy raids. William Rubinstein wrote, “Pre-literate societies, even those organized in a relatively advanced way, were renowned for their studied cruelty.”

    Since the rise of the state some 5,000 years ago, military activity has continued over much of the globe. In Europe, the oldest known battlefield is thought to date to 1250 BC. The Bronze Age has been described as a key period in the intensification of warfare, with the emergence of dedicated warriors and the development of metal weapons such as swords. Two other commonly named periods of increase are the Axial Age and modern times. The invention of gunpowder, its eventual use in warfare, and the acceleration of technological advances have driven major changes in war itself.

    In Coercion, Capital, and European States, AD 990–1992, Charles Tilly, a professor of history, sociology, and social science at the University of Michigan and Columbia University who has been described as “the founding father of 21st-century sociology”, argued that “war made the state, and the state made war”: wars lead to the creation of states, which in turn perpetuate war. Tilly’s theory of state formation is considered dominant in the state formation literature.

    Since 1945, great power wars, interstate wars, territorial conquests and war declarations have declined in frequency. Wars have been increasingly regulated by international humanitarian law. Battle deaths and casualties have declined, in part due to advances in military medicine and despite advances in weapons. In Western Europe, since the late 18th century, more than 150 conflicts and about 600 battles have taken place, but no battle has taken place since 1945.

    However, war in some respects has not necessarily declined. Civil wars have increased in absolute terms since 1945, and a distinctive feature of war since 1945 is that combat has largely been a matter of civil wars and insurgencies, although the number of civil wars has declined since 1991.

    Asymmetric warfare comprises the methods used in conflicts between belligerents of drastically different levels of military capability or size.

    Biological warfare, or germ warfare, is the use of biological infectious agents or toxins such as bacteria, viruses, and fungi against people, plants, or animals. This can be conducted through sophisticated technologies, like cluster munitions, or with rudimentary techniques like catapulting an infected corpse behind enemy lines, and can include weaponized or non-weaponized pathogens.

    Chemical warfare involves the use of weaponized chemicals in combat. Poison gas as a chemical weapon was principally used during World War I, and resulted in over a million estimated casualties, including more than 100,000 civilians.

    Cold warfare is an intense international rivalry without direct military conflict, but with a sustained threat of it, including high levels of military preparations, expenditures, and development, and may involve active conflicts by indirect means, such as economic warfare, political warfare, covert operations, espionage, cyberwarfare, or proxy wars.

    Conventional warfare is a form of warfare between states in which nuclear, biological, chemical or radiological weapons are not used or see limited deployment.

    Cyberwarfare involves the actions by a nation-state or international organization to attack and attempt to damage another nation’s information systems.

    Insurgency is a rebellion against authority, where irregular forces take up arms to change an existing political order. An insurgency can be fought via counterinsurgency, and may also be opposed by measures to protect the population, and by political and economic actions of various kinds aimed at undermining the insurgents’ claims against the incumbent regime.

    Information warfare is the application of destructive force on a large scale against information assets and systems, against the computers and networks that support the four critical infrastructures (the power grid, communications, financial, and transportation).

    Nuclear warfare is warfare in which nuclear weapons are the primary, or a major, method of achieving capitulation.
    Radiological warfare is any form of warfare involving deliberate radiation poisoning or contamination of an area with radiological sources.

    Total war is warfare by any means possible, disregarding the laws of war, placing no limits on legitimate military targets, using weapons and tactics resulting in significant civilian casualties, or demanding a war effort requiring significant sacrifices by the friendly civilian population.

    Unconventional warfare can be defined as “military and quasi-military operations other than conventional warfare” and may use covert forces or actions such as subversion, diversion, sabotage, espionage, biowarfare, sanctions, propaganda or guerrilla warfare.

    Entities contemplating going to war and entities considering whether to end a war may formulate war aims as an evaluation/propaganda tool. War aims may stand as a proxy for national-military resolve.

    Fried defines war aims as “the desired territorial, economic, military or other benefits expected following successful conclusion of a war”.

    Tangible/intangible aims:

    Tangible war aims may involve (for example) the acquisition of territory (as in the German goal of Lebensraum in the first half of the 20th century) or the recognition of economic concessions (as in the Anglo-Dutch Wars).

    Intangible war aims – like the accumulation of credibility or reputation – may have more tangible expression (“conquest restores prestige, annexation increases power”).

    Explicit/implicit aims:

    Explicit war aims may involve published policy decisions.
    Implicit war aims can take the form of minutes of discussion, memoranda and instructions.
    Positive/negative aims:

    “Positive war aims” cover tangible outcomes.
    “Negative war aims” forestall or prevent undesired outcomes.

    War aims can change in the course of conflict and may eventually morph into “peace conditions” – the minimal conditions under which a state may cease to wage a particular war.

    Estimates for total deaths due to war vary widely. One estimate holds that primitive warfare from 50,000 to 3000 BCE claimed 400 million ± 133 million victims, based on the assumption that it accounted for 15.1% of all deaths. Ian Morris estimated that the rate could be as high as 20%. Other scholars find the prehistoric percentage much lower, around 2%, similar to the rates found among Neanderthals and the ancestors of apes and primates.

    For the period 3000 BCE until 1991, estimates range from 151 million to several billion. The lowest estimate, 151 million, was calculated by William Eckhardt, who summed the recorded casualties and multiplied their average by the number of recorded battles or wars. This method excludes indirect deaths for premodern wars and all deaths for unrecorded wars; few premodern wars were recorded outside Eurasia, and only 18 wars are recorded worldwide for the period 3000–1500 BC. Later research shifted from Eckhardt’s approach to general estimates of the percentage of population killed by wars. Azar Gat and Ian Morris both give a lowest estimate of 1% for all of history, including the 20th century, or about 1 billion. The highest estimates of both scholars exceed the famous “hoax” figure of 3,640,000,000 people killed in wars, which circulated for decades in scholarly literature in various countries. Gat gives 5%, or about 5 billion. Morris gives 2% for the 20th century; 3% in Europe (and “slightly higher” elsewhere) for 1400–1900; 5% for the ancient empires of 500 BC – AD 200; 10% for the rest of history; and 20% for prehistory. His total for history is thus about 9 billion.
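    Eckhardt’s procedure, as described above, can be illustrated with a toy calculation. The figures below are invented for illustration only; they are not Eckhardt’s data.

    ```python
    # Toy sketch of Eckhardt's estimation method: average the casualty
    # figures of wars with recorded tolls, then scale by the total number
    # of recorded wars. All numbers here are hypothetical.

    recorded_casualties = [40_000, 120_000, 8_000, 300_000]  # wars with known tolls
    num_recorded_wars = 20                                   # total recorded wars

    average = sum(recorded_casualties) / len(recorded_casualties)
    estimate = average * num_recorded_wars

    print(f"average per recorded war: {average:,.0f}")   # 117,000
    print(f"estimated total deaths:   {estimate:,.0f}")  # 2,340,000
    ```

    Note how the method’s two blind spots follow directly from the arithmetic: wars absent from the record contribute nothing, and indirect deaths never enter the per-war figures.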

    The deadliest war in history, in terms of the cumulative number of deaths since its start, is World War II, from 1939 to 1945, with 70–85 million deaths, followed by the Mongol conquests at up to 60 million. As concerns a belligerent’s losses in proportion to its prewar population, the most destructive war in modern history may have been the Paraguayan War (see Paraguayan War casualties). In 2013 war resulted in 31,000 deaths, down from 72,000 deaths in 1990.

    War usually results in significant deterioration of infrastructure and the ecosystem, a decrease in social spending, famine, large-scale emigration from the war zone, and often the mistreatment of prisoners of war or civilians. For instance, of the nine million people who were on the territory of the Byelorussian SSR in 1941, some 1.6 million were killed by the Germans in actions away from battlefields, including about 700,000 prisoners of war, 500,000 Jews, and 320,000 people counted as partisans (the vast majority of whom were unarmed civilians). Another byproduct of some wars is the prevalence of propaganda by some or all parties in the conflict, and increased revenues by weapons manufacturers.

    Three of the ten most costly wars, in terms of loss of life, have been waged in the last century. These are the two World Wars, followed by the Second Sino-Japanese War (which is sometimes considered part of World War II, or as overlapping with it). Most of the others involved China or neighboring peoples. The death toll of World War II, at over 60 million, surpasses that of all other wars.

    Military personnel subject to combat in war often suffer mental and physical injuries, including depression, posttraumatic stress disorder, disease, injury, and death.

    In every war in which American soldiers have fought in, the chances of becoming a psychiatric casualty – of being debilitated for some period of time as a consequence of the stresses of military life – were greater than the chances of being killed by enemy fire.

    —No More Heroes, Richard Gabriel

    Swank and Marchand’s World War II study found that after sixty days of continuous combat, 98% of all surviving military personnel will become psychiatric casualties. Psychiatric casualties manifest themselves in fatigue cases, confusional states, conversion hysteria, anxiety, obsessional and compulsive states, and character disorders.

    One-tenth of mobilised American men were hospitalised for mental disturbances between 1942 and 1945, and after thirty-five days of uninterrupted combat, 98% of them manifested psychiatric disturbances in varying degrees.

    —14–18: Understanding the Great War, Stéphane Audoin-Rouzeau, Annette Becker

    Additionally, it has been estimated that anywhere from 18% to 54% of Vietnam War veterans suffered from posttraumatic stress disorder.

    Based on 1860 census figures, 8% of all white American males aged 13 to 43 died in the American Civil War, including about 6% in the North and approximately 18% in the South. The war remains the deadliest conflict in American history, resulting in the deaths of 620,000 military personnel. United States military casualties of war since 1775 have totaled over two million. Of the 60 million European military personnel who were mobilized in World War I, 8 million were killed, 7 million were permanently disabled, and 15 million were seriously injured.

    During Napoleon’s retreat from Moscow, more French military personnel died of typhus than were killed by the Russians. Of the 450,000 soldiers who crossed the Neman on 25 June 1812, fewer than 40,000 returned. More military personnel were killed from 1500 to 1914 by typhus than by military action. In addition, were it not for modern medical advances, there would be thousands more dead from disease and infection. For instance, during the Seven Years’ War, the Royal Navy reported that it conscripted 184,899 sailors, of whom 133,708 (72%) died of disease or were ‘missing’. It is estimated that between 1985 and 1994, 378,000 people per year died due to war.

    Most wars have resulted in significant loss of life, along with destruction of infrastructure and resources (which may lead to famine, disease, and death in the civilian population). During the Thirty Years’ War in Europe, the population of the Holy Roman Empire was reduced by 15 to 40 percent. Civilians in war zones may also be subject to war atrocities such as genocide, while survivors may suffer the psychological aftereffects of witnessing the destruction of war. War also results in lower quality of life and worse health outcomes. A medium-sized conflict with about 2,500 battle deaths reduces civilian life expectancy by one year and increases infant mortality by 10% and malnutrition by 3.3%. Additionally, about 1.8% of the population loses access to drinking water.

    Most estimates of World War II casualties indicate around 60 million people died, 40 million of whom were civilians. Deaths in the Soviet Union were around 27 million. Since a high proportion of those killed were young men who had not yet fathered any children, population growth in the postwar Soviet Union was much lower than it otherwise would have been.

    Once a war has ended, losing nations are sometimes required to pay war reparations to the victorious nations. In certain cases, land is ceded to the victorious nations. For example, the territory of Alsace-Lorraine has been traded between France and Germany on three different occasions.

    Typically, war becomes intertwined with the economy, and many wars are partially or entirely based on economic reasons. The common view among economic historians is that the Great Depression ended with the advent of World War II. Many economists believe that government spending on the war caused or at least accelerated recovery from the Great Depression, though some consider that it did not play a very large role in the recovery beyond helping to reduce unemployment. In most cases, such as the wars of Louis XIV, the Franco-Prussian War, and World War I, warfare primarily results in damage to the economies of the countries involved. For example, Russia’s involvement in World War I took such a toll on the Russian economy that it almost collapsed, and it greatly contributed to the start of the Russian Revolution of 1917.

    World War II was the most financially costly conflict in history; its belligerents cumulatively spent about a trillion U.S. dollars on the war effort (as adjusted to 1940 prices). The Great Depression of the 1930s ended as nations increased their production of war materials.

    By the end of the war, 70% of European industrial infrastructure was destroyed. Property damage in the Soviet Union inflicted by the Axis invasion was estimated at a value of 679 billion rubles. The combined damage consisted of complete or partial destruction of 1,710 cities and towns, 70,000 villages/hamlets, 2,508 church buildings, 31,850 industrial establishments, 40,000 mi (64,374 km) of railroad, 4100 railroad stations, 40,000 hospitals, 84,000 schools, and 43,000 public libraries.

    There are many theories about the motivations for war, but no consensus about which are most common. Military theorist Carl von Clausewitz said, “Every age has its own kind of war, its own limiting conditions, and its own peculiar preconceptions.”

    Dutch psychoanalyst Joost Meerloo held that, “War is often…a mass discharge of accumulated internal rage (where)…the inner fears of mankind are discharged in mass destruction.” Other psychoanalysts such as E.F.M. Durbin and John Bowlby have argued that human beings are inherently violent. This aggressiveness is fueled by displacement and projection, whereby a person transfers his or her grievances into bias and hatred against other races, religions, nations, or ideologies. By this theory, the nation-state preserves order in the local society while creating an outlet for aggression through warfare.

    The Italian psychoanalyst Franco Fornari, a follower of Melanie Klein, thought war was the paranoid or projective “elaboration” of mourning. Fornari thought war and violence develop out of our “love need”: our wish to preserve and defend the sacred object to which we are attached, namely our early mother and our fusion with her. For the adult, nations are the sacred objects that generate warfare. Fornari focused upon sacrifice as the essence of war: the astonishing willingness of human beings to die for their country, to give over their bodies to their nation.

    Despite Fornari’s theory that man’s altruistic desire for self-sacrifice for a noble cause is a contributing factor towards war, few wars have originated from a desire for war among the general populace. Far more often the general population has been reluctantly drawn into war by its rulers. One psychological theory that looks at the leaders is advanced by Maurice Walsh. He argues the general populace is more neutral towards war and wars occur when leaders with a psychologically abnormal disregard for human life are placed into power. War is caused by leaders who seek war such as Napoleon and Hitler. Such leaders most often come to power in times of crisis when the populace opts for a decisive leader, who then leads the nation to war.

    Naturally, the common people don’t want war; neither in Russia nor in England nor in America, nor for that matter in Germany. That is understood. But, after all, it is the leaders of the country who determine the policy and it is always a simple matter to drag the people along, whether it is a democracy or a fascist dictatorship or a Parliament or a Communist dictatorship. … the people can always be brought to the bidding of the leaders. That is easy. All you have to do is tell them they are being attacked and denounce the pacifists for lack of patriotism and exposing the country to danger. It works the same way in any country.

    — Hermann Göring at the Nuremberg trials, 18 April 1946

    Several theories concern the evolutionary origins of warfare. There are two main schools: One sees organized warfare as emerging in or after the Mesolithic as a result of complex social organization and greater population density and competition over resources; the other sees human warfare as a more ancient practice derived from common animal tendencies, such as territoriality and sexual competition.

    The latter school argues that since warlike behavior patterns are found in many primate species such as chimpanzees, as well as in many ant species, group conflict may be a general feature of animal social behavior. Some proponents of the idea argue that war, while innate, has been intensified greatly by developments of technology and social organization such as weaponry and states.

    Psychologist and linguist Steven Pinker argued that war-related behaviors may have been naturally selected in the ancestral environment due to the benefits of victory. He also argued that in order to have credible deterrence against other groups (as well as on an individual level), it was important to have a reputation for retaliation, causing humans to develop instincts for revenge as well as for protecting a group’s (or an individual’s) reputation (“honor”).

    Crofoot and Wrangham have argued that warfare, if defined as group interactions in which “coalitions attempt to aggressively dominate or kill members of other groups”, is a characteristic of most human societies. Those in which it has been lacking “tend to be societies that were politically dominated by their neighbors”.

    Ashley Montagu strongly denied universalistic instinctual arguments, contending that social factors and childhood socialization are important in determining the nature and presence of warfare. Thus, he argues, warfare is not a universal human occurrence and appears to have been a historical invention, associated with certain types of human societies. Montagu’s argument is supported by ethnographic research conducted in societies where the concept of aggression seems to be entirely absent, e.g. the Chewong and Semai of the Malay peninsula. Bobbi S. Low has observed a correlation between warfare and education, noting that societies where warfare is commonplace encourage their children to be more aggressive.

    War can be seen as a growth of economic competition in a competitive international system. In this view, wars begin as a pursuit of markets for natural resources and for wealth. War has also been linked to economic development by economic historians and development economists studying state-building and fiscal capacity. While this theory has been applied to many conflicts, such arguments become less valid as the increasing mobility of capital and information levels the distribution of wealth worldwide, or when one considers that it is relative, not absolute, wealth differences that may fuel wars. Some on the extreme right of the political spectrum, fascists in particular, support this view by asserting a natural right of a strong nation to whatever the weak cannot hold by force. Some centrist, capitalist world leaders, including U.S. presidents and generals, have also expressed support for an economic view of war.

    The Marxist theory of war is quasi-economic in that it states all modern wars are caused by competition for resources and markets between great (imperialist) powers, claiming these wars are a natural result of capitalism. Marxist economists Karl Kautsky, Rosa Luxemburg, Rudolf Hilferding and Vladimir Lenin theorized that imperialism was the result of capitalist countries needing new markets. Expansion of the means of production is only possible if there is a corresponding growth in consumer demand. Since the workers in a capitalist economy would be unable to fill the demand, producers must expand into non-capitalist markets to find consumers for their goods, hence driving imperialism.

    Demographic theories can be grouped into two classes, Malthusian and youth bulge theories:

    Malthusian theories see expanding population and scarce resources as a source of violent conflict. Pope Urban II in 1095, on the eve of the First Crusade, advocating Crusade as a solution to European overpopulation, said:

    For this land which you now inhabit, shut in on all sides by the sea and the mountain peaks, is too narrow for your large population; it scarcely furnishes food enough for its cultivators. Hence it is that you murder and devour one another, that you wage wars, and that many among you perish in civil strife. Let hatred, therefore, depart from among you; let your quarrels end. Enter upon the road to the Holy Sepulchre; wrest that land from a wicked race, and subject it to yourselves.

    This is one of the earliest expressions of what has come to be called the Malthusian theory of war, in which wars are caused by expanding populations and limited resources. Thomas Malthus (1766–1834) wrote that populations always increase until they are limited by war, disease, or famine. The violent herder–farmer conflicts in Nigeria, Mali, Sudan and other countries in the Sahel region have been exacerbated by land degradation and population growth.

    According to Heinsohn, who proposed youth bulge theory in its most generalized form, a youth bulge occurs when 30 to 40 percent of a nation’s males belong to the “fighting age” cohorts of 15 to 29 years. It follows periods with total fertility rates as high as 4–8 children per woman, with a 15–29-year delay. Heinsohn saw both past “Christianist” European colonialism and imperialism and today’s Islamist civil unrest and terrorism as results of high birth rates producing youth bulges.

    Among prominent historical events that have been attributed to youth bulges are the role played by the historically large youth cohorts in the rebellion and revolution waves of early modern Europe, including the French Revolution of 1789, and the effect of economic depression upon the largest German youth cohorts ever in explaining the rise of Nazism in Germany in the 1930s. The 1994 Rwandan genocide has also been analyzed as following a massive youth bulge. Youth bulge theory has been subjected to statistical analysis by the World Bank, Population Action International, and the Berlin Institute for Population and Development. Youth bulge theories have been criticized as leading to racial, gender and age discrimination.

    Geoffrey Parker argues that a distinctive “Western way of war”, based in Western Europe, chiefly allows historians to explain its extraordinary success in conquering most of the world after 1500:

    The Western way of war rests upon five principal foundations: technology, discipline, a highly aggressive military tradition, a remarkable capacity to innovate and to respond rapidly to the innovation of others and – from about 1500 onward – a unique system of war finance. The combination of all five provided a formula for military success…. The outcome of wars has been determined less by technology than by better war plans, the achievement of surprise, greater economic strength, and above all superior discipline.

    Parker argues that Western armies were stronger because they emphasized discipline, that is, “the ability of a formation to stand fast in the face of the enemy, whether attacking or being attacked, without giving way to the natural impulse of fear and panic.” Discipline came from drills and marching in formation, target practice, and creating small “artificial kinship groups”, such as the company and the platoon, to enhance psychological cohesion and combat efficiency.

    Rationalism is an international relations theory or framework. Rationalism, like neorealism, operates under the assumption that states or international actors are rational, seek the best possible outcomes for themselves, and desire to avoid the costs of war. Under one game-theoretic approach, rationalist theories posit that all actors can bargain and would be better off if war did not occur, and thus seek to understand why war nonetheless recurs. Under another rationalist game-theory approach without bargaining, the peace war game, optimal strategies can still be found that depend upon the number of iterations played. In “Rationalist Explanations for War”, James Fearon examined three rationalist explanations for why some countries engage in war:

    Issue indivisibilities

    Incentives to misrepresent or information asymmetry

    Commitment problems

    “Issue indivisibility” occurs when the two parties cannot avoid war by bargaining, because the thing over which they are fighting cannot be shared between them, but only owned entirely by one side or the other. “Information asymmetry with incentives to misrepresent” occurs when two countries have secrets about their individual capabilities, and do not agree on either who would win a war between them or the magnitude of a state’s victory or loss. For instance, Geoffrey Blainey argues that war is a result of miscalculation of strength. He cites historical examples of war and demonstrates, “war is usually the outcome of a diplomatic crisis which cannot be solved because both sides have conflicting estimates of their bargaining power.” Thirdly, bargaining may fail due to the states’ inability to make credible commitments.
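    Fearon’s puzzle can be made concrete with the standard bargaining model from the rationalist literature (a textbook sketch under simplified assumptions, not code or notation from the works cited above): because war is costly, there is normally a range of peaceful divisions of the disputed prize that both sides prefer to fighting.

    ```python
    # Textbook rationalist bargaining sketch (illustrative assumptions).
    # Two states divide a prize worth 1. State A wins a war with
    # probability p; fighting destroys value c_a for A and c_b for B.
    # Fractions keep the arithmetic exact.
    from fractions import Fraction as F

    def bargaining_range(p, c_a, c_b):
        """Interval of shares x for A that both states prefer to war.

        A accepts a deal x when x >= p - c_a (its expected war payoff);
        B accepts when 1 - x >= (1 - p) - c_b, i.e. x <= p + c_b.
        """
        low, high = max(F(0), p - c_a), min(F(1), p + c_b)
        return (low, high) if low <= high else None

    # With any positive war costs, a mutually acceptable range exists:
    print(bargaining_range(F(3, 5), F(1, 10), F(3, 20)))
    # prints: (Fraction(1, 2), Fraction(3, 4))
    ```

    War then requires something that blocks a deal inside this range, which is exactly where Fearon’s three mechanisms enter: private information about p with incentives to misrepresent it, an inability to commit to honoring the deal, or a prize that cannot be divided at any point inside the range.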

    Within the rationalist tradition, some theorists have suggested that individuals engaged in war suffer a normal level of cognitive bias, but are still “as rational as you and me”. According to philosopher Iain King, “Most instigators of conflict overrate their chances of success, while most participants underrate their chances of injury….” King asserts that “Most catastrophic military decisions are rooted in groupthink” which is faulty, but still rational. The rationalist theory focused on bargaining is currently under debate; the Iraq War proved to be an anomaly that undercuts the validity of applying rationalist theory to some wars.

    The statistical analysis of war was pioneered by Lewis Fry Richardson following World War I. More recent databases of wars and armed conflict have been assembled by the Correlates of War Project, Peter Brecke and the Uppsala Conflict Data Program. The following subsections consider causes of war from system, societal, and individual levels of analysis. This kind of division was first proposed by Kenneth Waltz in Man, the State, and War and has been often used by political scientists since then.

    There are several different international relations theory schools. Supporters of realism in international relations argue that the motivation of states is the quest for security, and conflicts can arise from the inability to distinguish defense from offense, which is called the security dilemma.

    Within the realist school as represented by scholars such as Henry Kissinger and Hans Morgenthau, and the neorealist school represented by scholars such as Kenneth Waltz and John Mearsheimer, two main sub-theories are:

    Balance of power theory: States have the goal of preventing a single state from becoming a hegemon, and war is the result of the would-be hegemon’s persistent attempts at power acquisition. In this view, an international system with more equal distribution of power is more stable, and “movements toward unipolarity are destabilizing.” However, evidence has shown power polarity is not actually a major factor in the occurrence of wars.

    Power transition theory: Hegemons impose stabilizing conditions on the world order, but they eventually decline; war occurs when a declining hegemon is challenged by a rising power, or when the hegemon moves pre-emptively to suppress the challenger. On this view, unlike in balance-of-power theory, wars become more probable when power is more equally distributed. This “power preponderance” hypothesis has empirical support.

    The two theories are not mutually exclusive and may be used to explain disparate events according to the circumstances. Liberalism as it relates to international relations emphasizes factors such as trade, and its role in disincentivizing conflict that would damage economic relations. Critics respond that military force may sometimes be at least as effective as trade at achieving economic benefits, especially historically if not as much today. Furthermore, trade relations which result in a high level of dependency may escalate tensions and lead to conflict. Empirical data on the relationship of trade to peace are mixed; moreover, some evidence suggests countries at war do not necessarily trade less with each other.

    Diversionary theory, also known as the “scapegoat hypothesis”, suggests the politically powerful may use war as a diversion or to rally domestic popular support. This is supported by literature showing out-group hostility enhances in-group bonding, and a significant domestic “rally effect” has been demonstrated when conflicts begin. However, studies examining the increased use of force as a function of need for internal political support are more mixed. U.S. war-time presidential popularity surveys taken during the presidencies of several recent U.S. leaders have supported diversionary theory.

    These theories suggest differences in people’s personalities, decision-making, emotions, belief systems, and biases are important in determining whether conflicts get out of hand. For instance, it has been proposed that conflict is modulated by bounded rationality and various cognitive biases, such as prospect theory.

    The morality of war has been the subject of debate for thousands of years.

    The two principal aspects of ethics in war, according to the just war theory, are jus ad bellum and jus in bello.

    Jus ad bellum (right to war) dictates which unfriendly acts and circumstances justify a proper authority in declaring war on another nation. There are six main criteria for the declaration of a just war: first, any just war must be declared by a lawful authority; second, it must be fought for a just and righteous cause, with sufficient gravity to merit large-scale violence; third, the just belligerent must have rightful intentions – namely, that they seek to advance good and curtail evil; fourth, a just belligerent must have a reasonable chance of success; fifth, the war must be a last resort; and sixth, the ends being sought must be proportional to the means being used.

    Jus in bello (right in war) is the set of ethical rules for conducting war. The two main principles are proportionality and discrimination. Proportionality regards how much force is necessary and morally appropriate to the ends being sought and the injustice suffered. The principle of discrimination determines who are the legitimate targets in a war, and specifically makes a separation between combatants, whom it is permissible to kill, and non-combatants, whom it is not. Failure to follow these rules can result in the loss of legitimacy for the just-war belligerent.

    The just war theory was foundational in the creation of the United Nations and in international law’s regulations on legitimate war.

    Lewis Coser, an American conflict theorist and sociologist, argued that conflict provides a function and a process whereby a succession of new equilibriums are created. Thus, the struggle of opposing forces, rather than being disruptive, may be a means of balancing and maintaining a social structure or society.

    Religious groups have long formally opposed or sought to limit war, as in the Second Vatican Council document Gaudium et Spes: “Any act of war aimed indiscriminately at the destruction of entire cities or of extensive areas along with their population is a crime against God and man himself. It merits unequivocal and unhesitating condemnation.”

    Anti-war movements have existed for every major war in the 20th century, including, most prominently, World War I, World War II, and the Vietnam War. In the 21st century, worldwide anti-war movements occurred in response to the United States invasion of Afghanistan and Iraq. Protests opposing the War in Afghanistan occurred in Europe, Asia, and the United States.

    During a war, the parties may agree to pauses. A ceasefire is a stoppage of a war in which each side agrees with the other to suspend aggressive actions, often due to mediation by a third party. Ceasefires may be declared as part of a formal treaty but also as part of an informal understanding between opposing forces. A ceasefire can be temporary with an intended end date or may be intended to last indefinitely. A ceasefire is distinct from an armistice in that an armistice is a formal end to a war, whereas a ceasefire may be a temporary stoppage.

    The immediate goal of a ceasefire is to stop violence but the underlying purposes of ceasefires vary. Ceasefires may be intended to meet short-term limited needs (such as providing humanitarian aid), manage a conflict to make it less devastating, or advance efforts to peacefully resolve a dispute. An actor may not always intend for a ceasefire to advance the peaceful resolution of a conflict but instead give the actor an upper hand in the conflict (for example, by re-arming and repositioning forces or attacking an unsuspecting adversary), which creates bargaining problems that may make ceasefires less likely to be implemented and less likely to be durable if implemented.

    The durability of ceasefire agreements is affected by several factors, such as demilitarized zones, withdrawal of troops and third-party guarantees and monitoring (e.g. peacekeeping). Ceasefire agreements are more likely to be durable when they reduce incentives to attack, reduce uncertainty about the adversary’s intentions, and when mechanisms are put in place to prevent accidents from spiraling into conflict.

  • International organization

    An international organization, also known as an intergovernmental organization or an international institution, is an organization that is established by a treaty or other type of instrument governed by international law and possesses its own legal personality, such as the United Nations, the Council of Europe, the African Union, Mercosur and BRICS. International organizations are composed primarily of member states, but may also include other entities, such as other international organizations, firms, and nongovernmental organizations. Additionally, entities (including states) may hold observer status.

    Examples of international organizations include: the UN General Assembly, World Trade Organization, African Development Bank, UN Economic and Social Council, UN Security Council, Asian Development Bank, International Bank for Reconstruction and Development, International Monetary Fund, International Finance Corporation, Inter-American Development Bank, and United Nations Environment Programme.

    Scottish law professor James Lorimer has been credited with coining the term “international organization” in an 1871 article in the Revue de Droit International et de Législation Comparée. Lorimer used the term frequently in his two-volume Institutes of the Law of Nations (1883, 1884). Other early uses of the term were by law professor Walther Schücking in works published in 1907, 1908 and 1909, and by political science professor Paul S. Reinsch in 1911. In 1935, Pitman B. Potter defined international organization as “an association or union of nations established or recognized by them for the purpose of realizing a common end”. He distinguished between bilateral and multilateral organizations on one hand and customary or conventional organizations on the other. In his 1922 book An Introduction to the Study of International Organization, Potter argued that international organization was distinct from “international intercourse” (all relations between states), “international law” (which lacks enforcement) and world government.

    International organizations are sometimes referred to as intergovernmental organizations (IGOs) to clarify the distinction from international non-governmental organizations (INGOs), which are non-governmental organizations (NGOs) that operate internationally. These include international nonprofit organizations such as the World Organization of the Scout Movement, the International Committee of the Red Cross and Médecins Sans Frontières, as well as lobby groups that represent the interests of multinational corporations.

    IGOs are established by a treaty that acts as a charter creating the group. Treaties are formed when lawful representatives (governments) of several states go through a ratification process, providing the IGO with an international legal personality. Intergovernmental organizations are an important aspect of public international law.

    Intergovernmental organizations in a legal sense should be distinguished from simple groupings or coalitions of states, such as the G7 or the Quartet. Such groups or associations have not been founded by a constituent document and exist only as task groups. Intergovernmental organizations must also be distinguished from treaties. Many treaties (such as the North American Free Trade Agreement, or the General Agreement on Tariffs and Trade before the establishment of the World Trade Organization) do not establish an independent secretariat and instead rely on the parties for their administration, for example by setting up a joint committee. Other treaties have established an administrative apparatus which was not deemed to have been granted binding legal authority. The broader concept wherein relations among three or more states are organized according to certain principles they hold in common is multilateralism.

    Intergovernmental organizations differ in function, membership, and membership criteria. They have various goals and scopes, often outlined in the treaty or charter. Some IGOs developed to fulfill a need for a neutral forum for debate or negotiation to resolve disputes. Others developed to pursue mutual interests with unified aims: to preserve peace through conflict resolution and better international relations, to promote international cooperation on matters such as environmental protection, to promote human rights, to promote social development (education, health care), to render humanitarian aid, and to foster economic development. Some are more general in scope (the United Nations) while others may have subject-specific missions (such as INTERPOL or the International Telecommunication Union and other standards organizations). Common types include:

    Worldwide or global organizations – generally open to nations worldwide as long as certain criteria are met: This category includes the United Nations (UN) and its specialized agencies, the World Health Organization, the International Telecommunication Union (ITU), the World Bank, and the International Monetary Fund (IMF). It also includes globally operating intergovernmental organizations that are not an agency of the UN, including for example: the Hague Conference on Private International Law, an intergovernmental organization based in The Hague that pursues the progressive unification of private international law; the International Criminal Court, which adjudicates crimes defined under the Rome Statute; and the CGIAR (formerly the Consultative Group for International Agricultural Research), a global partnership that unites intergovernmental organizations engaged in research for a food-secure future.

    Cultural, linguistic, ethnic, religious, or historical organizations – open to members based on some cultural, linguistic, ethnic, religious, or historical link. Examples include the Commonwealth of Nations, Arab League, Organisation internationale de la Francophonie, Community of Portuguese Language Countries, Organization of Turkic States, International Organization of Turkic Culture, Organisation of Islamic Cooperation, and Commonwealth of Independent States (CIS).

    Economic organizations – based on macro-economic policy goals: Some are dedicated to free trade and reduction of trade barriers, e.g. World Trade Organization, International Monetary Fund. Others are focused on international development. International cartels, such as OPEC, also exist. The Organisation for Economic Co-operation and Development (OECD) was founded as an economic-policy-focused organization. An example of a recently formed economic IGO is the Bank of the South.

    Educational organizations – centered around tertiary-level study. EUCLID University was chartered as a university and umbrella organization dedicated to sustainable development in signatory countries. The United Nations has founded multiple universities, notably the United Nations University and the University for Peace, for research and education around issues relevant to the UN, such as peace and sustainable development. The United Nations also has a dedicated training arm: the United Nations Institute for Training and Research (UNITAR).
    Health and population organizations – based on common perceived health and population goals. These are formed to address those challenges collectively, for example, the intergovernmental partnership for population and development Partners in Population and Development.
    Regional organizations – open to members from a particular continent or other specific region of the world. This category includes the Community of Latin American and Caribbean States (CELAC), Council of Europe (CoE), European Union (EU), Eurasian Economic Union (EAEU), Energy Community, North Atlantic Treaty Organization (NATO), Economic Community of West African States (ECOWAS), Organization for Security and Co-operation in Europe (OSCE), African Union (AU), Organization of American States (OAS), Association of Caribbean States (ACS), Association of Southeast Asian Nations (ASEAN), Islamic Development Bank, Union of South American Nations, Asia Cooperation Dialogue (ACD), Pacific Islands Forum, South Asian Association for Regional Cooperation (SAARC), Asian-African Legal Consultative Organization (AALCO) and the Organisation of Eastern Caribbean States (OECS).

    In regional organizations like the European Union, African Union, NATO, ASEAN and Mercosur, there are restrictions on membership due to factors such as geography or political regimes. To join the European Union (EU), a state must meet several criteria: it must be European, have a liberal-democratic political system, and have a capitalist economy.

    The oldest regional organization is the Central Commission for Navigation on the Rhine, created in 1815 by the Congress of Vienna.

    There are several different reasons a state may choose membership in an intergovernmental organization. But there are also reasons membership may be rejected.

    Reasons for participation:

    Economic rewards: In the case of the North American Free Trade Agreement (NAFTA), membership in the free trade agreement benefits the parties’ economies; for example, Mexican companies are given better access to U.S. markets due to their membership. External actors can also contribute to economic rewards and fuel the attractiveness of IGOs, notably for developing countries; for example, external donor funding from the European Union supports IGOs in the Global South.
    Political influence: Smaller countries, such as Portugal and Belgium, which do not carry much political clout on the international stage, gain a substantial increase in influence through membership in IGOs such as the European Union. IGOs also benefit more influential countries such as France and Germany, which can use them to increase their influence in smaller countries’ internal affairs and to deepen other nations’ dependence on them, thereby preserving allegiance.

    Security: Membership in an IGO such as NATO gives security benefits to member countries. This provides an arena where political differences can be resolved.
    Democracy: It has been noted that member countries experience a greater degree of democracy and those democracies survive longer.

    Reasons for rejecting membership:

    Loss of sovereignty: Membership often comes with a loss of state sovereignty as treaties are signed that require co-operation on the part of all member states.
    Insufficient benefits: Often membership does not bring about substantial enough benefit to warrant membership in the organization.

    Attractive external options: Bilateral co-operation with external actors or competing IGOs may provide more attractive (external) policy options for member states. Thus, powerful external actors may undermine existing IGOs.

    Intergovernmental organizations are provided with privileges and immunities that are intended to ensure their independent and effective functioning. They are specified in the treaties that give rise to the organization (such as the Convention on the Privileges and Immunities of the United Nations and the Agreement on the Privileges and Immunities of the International Criminal Court), which are normally supplemented by further multinational agreements and national regulations (for example the International Organizations Immunities Act in the United States). The organizations are thereby immune from the jurisdiction of national courts. Certain privileges and immunities are also specified in the Vienna Convention on the Representation of States in their Relations with International Organizations of a Universal Character of 1975, which, however, has not yet been ratified by the required 35 states and is thus not in force (status: 2022).

    Rather than by national jurisdiction, legal accountability is intended to be ensured by legal mechanisms that are internal to the intergovernmental organization itself and access to administrative tribunals. In the course of many court cases where private parties tried to pursue claims against international organizations, there has been a gradual realization that alternative means of dispute settlement are required as states have fundamental human rights obligations to provide plaintiffs with access to court in view of their right to a fair trial. Otherwise, the organizations’ immunities may be put in question in national and international courts. Some organizations hold proceedings before tribunals relating to their organization to be confidential, and in some instances have threatened disciplinary action should an employee disclose any of the relevant information. Such confidentiality has been criticized as a lack of transparency.

    The immunities also extend to employment law. In this regard, immunity from national jurisdiction necessitates that reasonable alternative means are available to effectively protect employees’ rights; in this context, a first instance Dutch court considered an estimated duration of proceedings before the Administrative Tribunal of the International Labour Organization of 15 years to be too long. An international organization does not pay taxes, is difficult to prosecute in court and is not obliged to provide information to any parliament.

    The United Nations focuses on five main areas: “maintaining peace and security, protecting human rights, delivering humanitarian aid, supporting sustainable development, and upholding international law”. UN agencies, such as the UN Relief and Works Agency, are generally regarded as international organizations in their own right. Additionally, the United Nations has specialized agencies, which are organizations within the United Nations System that have their own member states (often nearly identical to the UN member states) and are governed independently by them; examples include international organizations that predate the UN, such as the International Telecommunication Union and the Universal Postal Union, as well as organizations created after the UN, such as the World Health Organization (which was made up of regional organizations such as PAHO that predated the UN). A few UN specialized agencies are very centralized in policy and decision-making, while others are decentralized; for example, country-based project or mission directors and managers can decide what they want to do in the field.

    The UN agencies have a variety of tasks based on their specializations and interests. They provide different kinds of assistance to low-income and middle-income countries, which serves as a resource for developmental projects in developing countries. The UN works to protect against human rights violations, and within the UN system some specialized agencies, like the ILO and the United Nations High Commissioner for Refugees (UNHCR), work in the field of human rights protection. The ILO seeks to end discrimination in the workplace and child labor, and promotes fundamental labor rights and safe, secure conditions for laborers. The United Nations Environment Programme (UNEP) is the UN agency that coordinates UN activities on the environment.

    An early prominent example of an international organization is the Congress of Vienna of 1814–1815, an international diplomatic conference held to reconstitute the European political order after the downfall of the French Emperor Napoleon. States had by then become the main decision-makers, preferring to maintain the sovereignty established in 1648 by the Peace of Westphalia, which closed the Thirty Years’ War in Europe.

    The first and oldest international organization – established by means of a treaty, creating a permanent secretariat, and having a global membership – was the International Telecommunication Union (founded in 1865). The first general international organization – addressing a variety of issues – was the League of Nations, founded on 10 January 1920 with a principal mission of maintaining world peace after World War I. The United Nations followed this model after World War II; its charter was signed on 26 June 1945 in San Francisco, at the conclusion of the United Nations Conference on International Organization, and came into force on 24 October 1945. Currently, the UN is the main IGO, with arms such as the United Nations Security Council (UNSC), the General Assembly (UNGA), the International Court of Justice (ICJ), the Secretariat (UNSA), the Trusteeship Council (UNTC) and the Economic and Social Council (ECOSOC).

    When defined as “organizations with at least three state parties, a permanent headquarters or secretariat, as well as regular meetings and budgets”, the number of IGOs in the world increased from about 60 in 1940 to about 350 in 1980, after which it has remained roughly constant.

  • State (polity)

    A state is a political entity that regulates society and the population within a definite territory. Government is considered to form the fundamental apparatus of contemporary states.

    A country often has a single state, with various administrative divisions. A state may be a unitary state or some type of federal union; in the latter type, the term “state” is sometimes used to refer to the federated polities that make up the federation, and they may have some of the attributes of a sovereign state, except being under their federation and without the same capacity to act internationally. (Other terms that are used in such federal systems may include “province”, “region” or other terms.)

    For most of prehistory, people lived in stateless societies. The earliest forms of states arose about 5,500 years ago. Over time societies became more stratified and developed institutions leading to centralised governments. These gained state capacity in conjunction with the growth of cities, which was often dependent on climate and economic development, with centralisation often spurred on by insecurity and territorial competition.

    Over time, varied forms of states developed, that used many different justifications for their existence (such as divine right, the theory of the social contract, etc.). Today, the modern nation state is the predominant form of state to which people are subject. Sovereign states have sovereignty; any ingroup’s claim to have a state faces some practical limits via the degree to which other states recognize them as such. Satellite states are states that have de facto sovereignty but are often indirectly controlled by another state.

    Definitions of a state are disputed. According to sociologist Max Weber, a “state” is a polity that maintains a monopoly on the legitimate use of violence, although other definitions are common. Absence of a state does not preclude the existence of a society, such as stateless societies like the Haudenosaunee Confederacy that “do not have either purely or even primarily political institutions or roles”. The degree and extent of governance of a state is used to determine whether it has failed.

    The word state and its cognates in some other European languages (stato in Italian, estado in Spanish and Portuguese, état in French, Staat in German and Dutch) ultimately derive from the Latin word status, meaning “condition, circumstances”. Latin status derives from stare, “to stand”, or remain or be permanent, thus providing the sacred or magical connotation of the political entity.

    The English noun state in the generic sense “condition, circumstances” predates the political sense. It was introduced to Middle English c. 1200, both from Old French and directly from Latin.

    With the revival of the Roman law in 14th-century Europe, the term came to refer to the legal standing of persons (such as the various “estates of the realm” – noble, common, and clerical), and in particular the special status of the king. The highest estates, generally those with the most wealth and social rank, were those that held power. The word also had associations with Roman ideas (dating back to Cicero) about the “status rei publicae”, the “condition of public matters”. In time, the word lost its reference to particular social groups and became associated with the legal order of the entire society and the apparatus of its enforcement.

    The early 16th-century works of Machiavelli (especially The Prince) played a central role in popularizing the use of the word “state” in something similar to its modern sense. The contrasting of church and state also dates to the 16th century. The North American colonies were called “states” as early as the 1630s. The expression “L’État, c’est moi” (“I am the State”), attributed to Louis XIV though probably apocryphal, is recorded in the late 18th century.

    There is no academic consensus on the definition of the state. The term “state” refers to a set of different, but interrelated and often overlapping, theories about a certain range of political phenomena. According to Walter Scheidel, mainstream definitions of the state have the following in common: “centralized institutions that impose rules, and back them up by force, over a territorially circumscribed population; a distinction between the rulers and the ruled; and an element of autonomy, stability, and differentiation. These distinguish the state from less stable forms of organization, such as the exercise of chiefly power.”

    The most commonly used definition is by Max Weber who describes the state as a compulsory political organization with a centralized government that maintains a monopoly of the legitimate use of force within a certain territory. Weber writes that the state “is a human community that (successfully) claims the monopoly of the legitimate use of physical force within a given territory.”

    While defining a state, it is important not to confuse it with a nation, an error that occurs frequently in common discussion. A state refers to a political unit with sovereignty over a given territory. While a state is more of a “political-legal abstraction,” the definition of a nation is more concerned with political identity and cultural or historical factors. Importantly, nations do not possess the organizational characteristics of states, such as geographic boundaries or authority figures and officials. Additionally, a nation does not claim a monopoly on the legitimate use of force over its populace, while a state does, as Weber indicated. An example of the instability that arises when a state lacks a monopoly on the use of force can be seen in African states, which have remained weak in the absence of the interstate war on which European state-building relied. A state should not be confused with a government: a government is an organization that has been granted the authority to act on behalf of a state. Nor should a state be confused with a society: a society refers to all organized groups, movements, and individuals who are independent of the state and seek to remain outside its influence.

    Neuberger offers a slightly different definition of the state with respect to the nation: the state is “a primordial, essential, and permanent expression of the genius of a specific (nation).”

    The definition of a state is also dependent on how and why states form. The contractarian view of the state suggests that states form because people can all benefit from cooperation with others and that without a state there would be chaos. The contractarian view focuses more on the alignment and conflict of interests between individuals in a state. On the other hand, the predatory view of the state focuses on the potential mismatch between the interests of the people and the interests of the state. Charles Tilly goes so far as to say that states “resemble a form of organized crime and should be viewed as extortion rackets.” He argued that the state sells protection from itself and raised the question of why people should trust a state when they cannot trust one another.

    Tilly defines states as “coercion-wielding organisations that are distinct from households and kinship groups and exercise a clear priority in some respects over all other organizations within substantial territories.” Tilly includes city-states, theocracies and empires in his definition along with nation-states, but excludes tribes, lineages, firms and churches. According to Tilly, states can be seen in the archaeological record as early as 6000 BC; in Europe they appeared around AD 990, but became particularly prominent after 1490. Tilly defines a state’s “essential minimal activities” as:

    1. War making – “eliminating or neutralizing their outside rivals”
    2. State making – “eliminating or neutralizing their rivals inside their own territory”
    3. Protection – “eliminating or neutralizing the enemies of their clients”
    4. Extraction – “acquiring the means of carrying out the first three activities”
    5. Adjudication – “authoritative settlement of disputes among members of the population”
    6. Distribution – “intervention in the allocation of goods among the members of the population”
    7. Production – “control of the creation and transformation of goods and services produced by the population”

    Importantly, Tilly makes the case that war is an essential part of state-making; that wars create states and vice versa.

    Modern academic definitions of the state frequently include the criterion that a state has to be recognized as such by the international community.

    Liberal thought provides another possible teleology of the state. According to John Locke, the goal of the state or commonwealth is “the preservation of property” (Second Treatise on Government), with ‘property’ in Locke’s work referring not only to personal possessions but also to one’s life and liberty. On this account, the state provides the basis for social cohesion and productivity, creating incentives for wealth-creation by providing guarantees of protection for one’s life, liberty and personal property. Provision of public goods is considered by some, such as Adam Smith, to be a central function of the state, since these goods would otherwise be underprovided. Tilly has challenged narratives of the state as the result of a societal contract or the provision of services in a free market – he characterizes the state as more akin to a protection racket in the vein of organized crime.

    While economic and political philosophers have contested the monopolistic tendency of states, Robert Nozick argues that the use of force naturally tends towards monopoly.

    Another commonly accepted definition of the state is the one given at the Montevideo Convention on the Rights and Duties of States in 1933. It provides that “[t]he state as a person of international law should possess the following qualifications: (a) a permanent population; (b) a defined territory; (c) government; and (d) capacity to enter into relations with the other states.” And that “[t]he federal state shall constitute a sole person in the eyes of international law.”

    Confounding the definition problem is that “state” and “government” are often used as synonyms in common conversation and even some academic discourse. According to this definition schema, the states are nonphysical persons of international law, and governments are organizations of people. The relationship between a government and its state is one of representation and authorized agency.

    Charles Tilly distinguished between empires, theocracies, city-states and nation-states. According to Michael Mann, the four persistent types of state activities are:

    1. Maintenance of internal order
    2. Military defence and aggression
    3. Maintenance of communications infrastructure
    4. Economic redistribution

    Josep Colomer distinguished between empires and states in the following way:

    1. Empires were vastly larger than states
    2. Empires lacked fixed or permanent boundaries whereas a state had fixed boundaries
    3. Empires had a “compound of diverse groups and territorial units with asymmetric links with the center” whereas a state had “supreme authority over a territory and population”
    4. Empires had multi-level, overlapping jurisdictions whereas a state sought monopoly and homogenization

    According to Michael Hechter and William Brustein, the modern state was differentiated from “leagues of independent cities, empires, federations held together by loose central control, and theocratic federations” by four characteristics:

    1. The modern state sought and achieved territorial expansion and consolidation
    2. The modern state achieved unprecedented control over social, economic, and cultural activities within its boundaries
    3. The modern state established ruling institutions that were separate from other institutions
    4. The ruler of the modern state was far better at monopolizing the means of violence

    States may be classified by political philosophers as sovereign if they are not dependent on, or subject to any other power or state. Other states are subject to external sovereignty or hegemony where ultimate sovereignty lies in another state. Many states are federated states which participate in a federal union. A federated state is a territorial and constitutional community forming part of a federation. (Compare confederacies or confederations such as Switzerland.) Such states differ from sovereign states in that they have transferred a portion of their sovereign powers to a federal government.

    One can commonly and sometimes readily (but not necessarily usefully) classify states according to their apparent make-up or focus. The concept of the nation-state, theoretically or ideally co-terminous with a “nation”, became very popular by the 20th century in Europe, but occurred rarely elsewhere or at other times. In contrast, some states have sought to make a virtue of their multi-ethnic or multinational character (Habsburg Austria-Hungary, for example, or the Soviet Union), and have emphasised unifying characteristics such as autocracy, monarchical legitimacy, or ideology. Other states, often fascist or authoritarian ones, promoted state-sanctioned notions of racial superiority. Other states may bring ideas of commonality and inclusiveness to the fore: note the res publica of ancient Rome and the Rzeczpospolita of Poland-Lithuania which finds echoes in the modern-day republic. The concept of temple states centred on religious shrines occurs in some discussions of the ancient world. Relatively small city-states, once a relatively common and often successful form of polity, have become rarer and comparatively less prominent in modern times. Modern-day independent city-states include Vatican City, Monaco, and Singapore. Other city-states survive as federated states, like the present day German city-states, or as otherwise autonomous entities with limited sovereignty, like Hong Kong, Gibraltar and Ceuta. To some extent, urban secession, the creation of a new city-state (sovereign or federated), continues to be discussed in the early 21st century in cities such as London.

    A state can be distinguished from a government. The state is the organization while the government is the particular group of people, the administrative bureaucracy that controls the state apparatus at a given time. That is, governments are the means through which state power is employed. States are served by a continuous succession of different governments. States are immaterial and nonphysical social objects, whereas governments are groups of people with certain coercive powers.

    Each successive government is composed of a specialized and privileged body of individuals, who monopolize political decision-making and are separated by status and organization from the population as a whole.

    States can also be distinguished from the concept of a “nation”, where “nation” refers to a cultural-political community of people. A nation-state refers to a situation where a single ethnicity is associated with a specific state.

    In classical thought, the state was identified with both political society and civil society as a form of political community, while modern thought distinguished the nation-state as a political society from civil society as a form of economic society.

    Thus in modern thought the state is contrasted with civil society.

    Antonio Gramsci believed that civil society is the primary locus of political activity because it is where all forms of “identity formation, ideological struggle, the activities of intellectuals, and the construction of hegemony take place”, and that civil society was the nexus connecting the economic and political spheres. Arising out of the collective actions of civil society is what Gramsci calls “political society”, which Gramsci differentiates from the notion of the state as a polity. He stated that politics was not a “one-way process of political management” but, rather, that the activities of civil organizations conditioned the activities of political parties and state institutions, and were conditioned by them in turn. Louis Althusser argued that civil organizations such as church, schools, and the family are part of an “ideological state apparatus” which complements the “repressive state apparatus” (such as police and military) in reproducing social relations.

    Jürgen Habermas spoke of a public sphere that was distinct from both the economic and political sphere.

    Given the role that many social groups have in the development of public policy and the extensive connections between state bureaucracies and other institutions, it has become increasingly difficult to identify the boundaries of the state. Privatization, nationalization, and the creation of new regulatory bodies also change the boundaries of the state in relation to society. Often the nature of quasi-autonomous organizations is unclear, generating debate among political scientists on whether they are part of the state or civil society. Some political scientists thus prefer to speak of policy networks and decentralized governance in modern societies rather than of state bureaucracies and direct state control over policy.

    The earliest forms of the state emerged whenever it became possible to centralize power in a durable way. Agriculture and a settled population are commonly cited as necessary conditions for state formation. Certain types of agriculture are more conducive to state formation, such as grain (wheat, barley, millet), because they are suited to concentrated production, taxation, and storage. Agriculture and writing are almost everywhere associated with this process: agriculture because it allowed for the emergence of a social class of people who did not have to spend most of their time providing for their own subsistence, and writing (or an equivalent of writing, like Inca quipus) because it made possible the centralization of vital information. Bureaucratization made expansion over large territories possible.

    The first known states were created in Egypt, Mesopotamia, India, China, Mesoamerica, and the Andes. It is only in relatively modern times that states have almost completely displaced alternative “stateless” forms of political organization of societies all over the planet. Roving bands of hunter-gatherers and even fairly sizable and complex tribal societies based on herding or agriculture have existed without any full-time specialized state organization, and these “stateless” forms of political organization have in fact prevailed for all of the prehistory and much of human history and civilization.

    The primary organizational forms competing with the state were religious organizations (such as the Church) and city republics.

    Since the late 19th century, virtually the entirety of the world’s inhabitable land has been parcelled up into areas with more or less definite borders claimed by various states. Earlier, quite large land areas had been either unclaimed or uninhabited, or inhabited by nomadic peoples who were not organised as states. However, even within present-day states there are vast areas of wilderness, like the Amazon rainforest, which are uninhabited or inhabited solely or mostly by indigenous people (and some of them remain uncontacted). Also, there are so-called “failed states” which do not hold de facto control over all of their claimed territory or where this control is challenged. Currently, the international community comprises around 200 sovereign states, the vast majority of which are represented in the United Nations.

    For most of human history, people have lived in stateless societies, characterized by a lack of concentrated authority, and the absence of large inequalities in economic and political power.

    The anthropologist Tim Ingold writes:

    It is not enough to observe, in a now rather dated anthropological idiom, that hunter-gatherers live in ‘stateless societies’, as though their social lives were somehow lacking or unfinished, waiting to be completed by the evolutionary development of a state apparatus. Rather, the principle of their sociality, as Pierre Clastres has put it, is fundamentally against the state.

    During the Neolithic period, human societies underwent major cultural and economic changes, including the development of agriculture, the formation of sedentary societies and fixed settlements, increasing population densities, and the use of pottery and more complex tools.

    Sedentary agriculture led to the development of property rights, domestication of plants and animals, and larger family sizes. It also provided the basis for an external centralized state. By producing a large surplus of food, more division of labor was realized, which enabled people to specialize in tasks other than food production. Early states were characterized by highly stratified societies, with a privileged and wealthy ruling class that was subordinate to a monarch. The ruling classes began to differentiate themselves through forms of architecture and other cultural practices that were different from those of the subordinate laboring classes.

    In the past, it was suggested that the centralized state was developed to administer large public works systems (such as irrigation systems) and to regulate complex economies. However, modern archaeological and anthropological evidence does not support this thesis, pointing to the existence of several non-stratified and politically decentralized complex societies.

    Mesopotamia is generally considered to be the location of the earliest civilization or complex society, meaning that it contained cities, full-time division of labor, social concentration of wealth into capital, unequal distribution of wealth, ruling classes, community ties based on residency rather than kinship, long distance trade, monumental architecture, standardized forms of art and culture, writing, and mathematics and science. It was the world’s first literate civilization, and formed the first sets of written laws. Bronze metallurgy spread within Afro-Eurasia from 3000 BC, leading to a military revolution in the use of bronze weaponry, which facilitated the rise of states.

    Although state-forms existed before the rise of the Ancient Greek empire, the Greeks were the first people known to have explicitly formulated a political philosophy of the state, and to have rationally analyzed political institutions. Prior to this, states were described and justified in terms of religious myths.

    Several important political innovations of classical antiquity came from the Greek city-states and the Roman Republic. The Greek city-states before the 4th century granted citizenship rights to their free population, and in Athens these rights were combined with a directly democratic form of government that was to have a long afterlife in political thought and history.

    During medieval times in Europe, the state was organized on the principle of feudalism, and the relationship between lord and vassal became central to social organization. Feudalism led to the development of greater social hierarchies.

    The formalization of the struggles over taxation between the monarch and other elements of society (especially the nobility and the cities) gave rise to what is now called the Ständestaat, or the state of Estates, characterized by parliaments in which key social groups negotiated with the king about legal and economic matters. These estates of the realm sometimes evolved in the direction of fully-fledged parliaments, but sometimes lost out in their struggles with the monarch, leading to greater centralization of lawmaking and military power in his hands. Beginning in the 15th century, this centralizing process gave rise to the absolutist state.

    Cultural and national homogenization figured prominently in the rise of the modern state system. Since the absolutist period, states have largely been organized on a national basis. The concept of a national state, however, is not synonymous with nation state. Even in the most ethnically homogeneous societies there is not always a complete correspondence between state and nation, hence the active role often taken by the state to promote nationalism through an emphasis on shared symbols and national identity.

    Charles Tilly argues that the number of total states in Western Europe declined rapidly from the Late Middle Ages to the Early Modern Era during a process of state formation. Other research has disputed whether such a decline took place.

    For Edmund Burke (Dublin 1729 – Beaconsfield 1797), “a state without the means of some change is without the means of its conservation” (Reflections on the Revolution in France).

    According to Hendrik Spruyt, the modern state is different from its predecessor polities in two main respects: (1) modern states have a greater capacity to intervene in their societies, and (2) modern states are buttressed by the principle of international legal sovereignty and the judicial equivalence of states. The two features began to emerge in the Late Middle Ages, but the modern state form took centuries to come fully to fruition. Other aspects of modern states are that they tend to be organized as unified national polities and that they have rational-legal bureaucracies.

    Sovereign equality did not become fully global until after World War II amid decolonization. Adom Getachew writes that it was not until the 1960 Declaration on the Granting of Independence to Colonial Countries and Peoples that the international legal context for popular sovereignty was instituted. Historians Jane Burbank and Frederick Cooper argue that “Westphalian sovereignty” – the notion that bounded, unitary states interact with equivalent states – “has more to do with 1948 than 1648.”

    Theories for the emergence of the earliest states emphasize grain agriculture and settled populations as necessary conditions.

    However, not all types of property are equally exposed to the risk of looting or equally subject to taxation. Goods differ in their shelf life. Certain agricultural products, fish, and dairy spoil quickly and cannot be stored without refrigeration or freezing technology, which was unavailable in ancient times. As a result, such perishable goods were of little interest to either looters or the king. (In ancient times, especially before the invention of money, taxation was collected primarily in agricultural produce.) Both looters and rulers sought goods with long shelf lives, such as grains (wheat, barley, rice, corn, etc.), which, under proper storage conditions, could be preserved for extended periods. With the domestication of wheat and the establishment of agricultural communities, the need for protection from bandits arose, along with the emergence of strong governance to provide it. Mayshar et al. (2020) demonstrated that societies cultivating grains tended to develop hierarchical structures with a ruling elite that collected taxes, whereas societies that relied on root crops (which have short shelf lives) did not develop such hierarchies. The cultivation of grains became concentrated in regions with fertile soil, where grain production was more profitable than root crops, even after accounting for taxes imposed by rulers and raids by looters.

    However, protection was not the only public good necessitating a centralized government. The shift to agriculture based on irrigation systems, as seen in ancient Egypt, required cooperation among farmers. An individual farmer could not control the floods from the Nile River alone. Managing the vast amounts of water during the annual floods and utilizing them efficiently allowed for a significant increase in agricultural yield, but this required an elaborate network of irrigation canals to distribute water efficiently across fields while minimizing waste.

    Such a system exhibited characteristics of a natural monopoly, as its construction involved substantial fixed costs, making it a lucrative asset for the ruling elite. Bentzen, Kaarsen, and Wingender (2017) showed that in pre-modern societies, regions dependent on irrigation-intensive agriculture experienced higher levels of land inequality. The concentration of land and control over water resources strengthened elite power, enabling them to resist democratization in the modern era. Even today, countries that rely on irrigated agriculture tend to be less democratic than those relying on rain-fed farming.

    Some argue that climate change led to a greater concentration of human populations around dwindling waterways.

    Hendrik Spruyt distinguishes between three prominent categories of explanations for the emergence of the modern state as a dominant polity: (1) security-based explanations that emphasize the role of warfare, (2) economy-based explanations that emphasize trade, property rights and capitalism as drivers behind state formation, and (3) institutionalist theories that see the state as an organizational form that is better able to resolve conflict and cooperation problems than competing political organizations.

    According to Philip Gorski and Vivek Swaroop Sharma, the “neo-Darwinian” framework for the emergence of sovereign states is the dominant explanation in the scholarship. The neo-Darwinian framework emphasizes how the modern state emerged as the dominant organizational form through natural selection and competition.

    Most political theories of the state can roughly be classified into two categories:

    “liberal” or “conservative” theories treat capitalism as a given, and then concentrate on the function of states in capitalist society. These theories tend to see the state as a neutral entity, separated from society and the economy.

    Marxist and anarchist theories, on the other hand, see politics as intimately tied in with economic relations, and emphasize the relation between economic power and political power. They see the state as a partisan instrument that primarily serves the interests of the upper class.

    Anarchism as a political philosophy regards the state and hierarchies as unnecessary and harmful, and instead promotes a stateless society, or anarchy, a self-managed, self-governed society based on voluntary, cooperative institutions.

    Anarchists believe that the state is inherently an instrument of domination and repression, no matter who is in control of it. Anarchists note that the state possesses the monopoly on the legal use of violence. Unlike Marxists, anarchists believe that revolutionary seizure of state power should not be a political goal. They believe instead that the state apparatus should be completely dismantled, and an alternative set of social relations created, which are not based on state power at all.

    Various Christian anarchists, such as Jacques Ellul, have identified the state and political power as the Beast in the Book of Revelation.

    Anarcho-capitalists such as Murray Rothbard come to some of the same conclusions about the state apparatus as anarchists, but for different reasons. The two principles that anarcho-capitalists rely on most are consent and non-initiation. Consent in anarcho-capitalist theory requires that individuals explicitly assent to the jurisdiction of the state, which excludes Lockean tacit consent. Consent may also create a right of secession, which destroys any concept of a government monopoly on force. Coercive monopolies are excluded by the non-initiation-of-force principle because they must use force in order to prevent others from offering the same service that they do. Anarcho-capitalists start from the belief that replacing monopolistic states with competitive providers is necessary on normative, justice-based grounds.

    Anarcho-capitalists believe that the market values of competition and privatization can better provide the services currently provided by the state. Murray Rothbard argues in Power and Market that any and all government functions could be better fulfilled by private actors, including defense, infrastructure, and legal adjudication.

    Marx and Engels were clear that the goal of communism was a classless society in which the state would have “withered away”, replaced only by the “administration of things”. Their views are found throughout their Collected Works and address past or then-extant state forms from an analytical and tactical viewpoint, but not future social forms; speculation about the latter is generally antithetical to groups considering themselves Marxist but who, not having conquered the existing state power(s), are not in the position of supplying the institutional form of an actual society. To the extent that it makes sense to speak of one, there is no single “Marxist theory of the state”; rather, several different purportedly “Marxist” theories have been developed by adherents of Marxism.

    Marx’s early writings portrayed the bourgeois state as parasitic, built upon the superstructure of the economy, and working against the public interest. He also wrote that the state mirrors class relations in society in general, acting as a regulator and repressor of class struggle, and as a tool of political power and domination for the ruling class. The Communist Manifesto claims the state to be nothing more than “a committee for managing the common affairs of the bourgeoisie.”

    For Marxist theorists, the role of the modern bourgeois state is determined by its function in the global capitalist order. Ralph Miliband argued that the ruling class uses the state as its instrument to dominate society by virtue of the interpersonal ties between state officials and economic elites. For Miliband, the state is dominated by an elite that comes from the same background as the capitalist class. State officials therefore share the same interests as owners of capital and are linked to them through a wide array of social, economic, and political ties.

    Gramsci’s theories of state emphasized that the state is only one of the institutions in society that helps maintain the hegemony of the ruling class, and that state power is bolstered by the ideological domination of the institutions of civil society, such as churches, schools, and mass media.

    Pluralists view society as a collection of individuals and groups, who are competing for political power. They then view the state as a neutral body that simply enacts the will of whichever groups dominate the electoral process. Within the pluralist tradition, Robert Dahl developed the theory of the state as a neutral arena for contending interests or its agencies as simply another set of interest groups. With power competitively arranged in society, state policy is a product of recurrent bargaining. Although pluralism recognizes the existence of inequality, it asserts that all groups have an opportunity to pressure the state. The pluralist approach suggests that the modern democratic state’s actions are the result of pressures applied by a variety of organized interests. Dahl called this kind of state a polyarchy.

    Pluralism has been challenged on the ground that it is not supported by empirical evidence. Citing surveys showing that the large majority of people in high leadership positions are members of the wealthy upper class, critics of pluralism claim that the state serves the interests of the upper class rather than equitably serving the interests of all social groups.

    Jürgen Habermas believed that the base-superstructure framework, used by many Marxist theorists to describe the relation between the state and the economy, was overly simplistic. He felt that the modern state plays a large role in structuring the economy, by regulating economic activity and being a large-scale economic consumer/producer, and through its redistributive welfare state activities. Because of the way these activities structure the economic framework, Habermas felt that the state cannot be looked at as passively responding to economic class interests.

    Michel Foucault believed that modern political theory was too state-centric, saying “Maybe, after all, the state is no more than a composite reality and a mythologized abstraction, whose importance is a lot more limited than many of us think.” He thought that political theory was focusing too much on abstract institutions, and not enough on the actual practices of government. In Foucault’s opinion, the state had no essence. He believed that instead of trying to understand the activities of governments by analyzing the properties of the state (a reified abstraction), political theorists should be examining changes in the practice of government to understand changes in the nature of the state. Foucault developed the concept of governmentality while considering the genealogy of state, and considers the way in which an individual’s understanding of governance can influence the function of the state.

    Foucault argues that it is technology that has created the state and made it so elusive and successful, and that instead of looking at the state as something to be toppled, in the Marxist and anarchist sense of something to be overthrown, we should look at it as a technological manifestation or system with many heads. Every scientific and technological advance, Foucault argues, has come into the service of the state, and it is with the emergence of the mathematical sciences, and especially of mathematical statistics, that one gains an understanding of the complex technology by which the modern state was so successfully created. Foucault insists that the nation state was not a historical accident but a deliberate production, in which the modern state, coincident with the emerging practice of the police (cameral science), ‘allowed’ the population to ‘come in’ to jus gentium and civitas (civil society) after having been deliberately excluded for several millennia. Democracy (the newly formed voting franchise), Foucault insists, was not, as it is always painted by political revolutionaries and political philosophers alike, a cry for political freedom or a desire to be accepted by the ‘ruling elite’, but part of a skilled endeavour to switch technologies readily available from the medieval period, such as translatio imperii, plenitudo potestatis and extra Ecclesiam nulla salus, into mass persuasion of the future industrial ‘political’ population (deception over the population), in which the political population was now asked to insist upon itself that “the president must be elected”. These political symbol agents, represented by the pope and the president, are thereby democratised. Foucault calls these new forms of technology biopower; they form part of our political inheritance, which he calls biopolitics.

    Heavily influenced by Gramsci, the Greek neo-Marxist theorist Nicos Poulantzas argued that capitalist states do not always act on behalf of the ruling class, and that when they do, it is not necessarily because state officials consciously strive to do so, but because the ‘structural’ position of the state is configured so as to ensure that the long-term interests of capital are always dominant. Poulantzas’ main contribution to the Marxist literature on the state was the concept of the ‘relative autonomy’ of the state. While Poulantzas’ work on ‘state autonomy’ has served to sharpen and specify a great deal of Marxist literature on the state, his own framework came under criticism for its ‘structural functionalism’.

    The state can be considered as a single structural universe: the historical reality that takes shape in societies characterized by a codified or crystallized law; by a power organized hierarchically and justified by the law that gives it authority; by a well-defined social and economic stratification; by an economic and social organization that gives the society precise organic characteristics; and by one (or multiple) religious organizations that justify the power expressed by such a society and support the religious beliefs of individuals, accepted by society as a whole. Such a structural universe evolves in a cyclical manner, presenting two different historical phases (a mercantile phase, or “open society”, and a feudal phase, or “closed society”), with characteristics so divergent that they qualify as two different levels of civilization. These levels are never definitive but alternate cyclically, and each of the two can be considered progressive (in a partisan way, totally independent of the real value of well-being, the degrees of freedom granted, the equality realized, or any concrete possibility of further progress in the level of civilization), even by the most cultured, educated, and intellectually equipped fractions of the societies of both historical phases.

    State autonomy theorists believe that the state is an entity that is impervious to external social and economic influence and that it has interests of its own.

    “New institutionalist” writings on the state, such as the works of Theda Skocpol, suggest that state actors are to an important degree autonomous. In other words, state personnel have interests of their own, which they can and do pursue independently of (and at times in conflict with) actors in society. Since the state controls the means of coercion, and given the dependence of many groups in civil society on the state for achieving any goals they may espouse, state personnel can to some extent impose their own preferences on civil society.

    States generally rely on a claim to some form of political legitimacy in order to maintain domination over their subjects.

    The rise of the modern-day state system was closely related to changes in political thought, especially concerning the changing understanding of legitimate state power and control. Early modern defenders of absolutism (absolute monarchy), such as Thomas Hobbes and Jean Bodin, undermined the doctrine of the divine right of kings by arguing that the power of kings should be justified by reference to the people. Hobbes in particular went further, arguing that political power should be justified with reference to the individual (Hobbes wrote in the time of the English Civil War), not just to the people understood collectively. Both Hobbes and Bodin thought they were defending the power of kings, not advocating for democracy, but their arguments about the nature of sovereignty were fiercely resisted by more traditional defenders of the power of kings, such as Sir Robert Filmer in England, who thought that such defenses ultimately opened the way to more democratic claims.

    Max Weber identified three main sources of political legitimacy in his works. The first, legitimacy based on traditional grounds, is derived from a belief that things should be as they have been in the past, and that those who defend these traditions have a legitimate claim to power. The second, legitimacy based on charismatic leadership, is devotion to a leader or group that is viewed as exceptionally heroic or virtuous. Max Weber’s concept of charisma is also explored by Fukuyama, who uses it to explain why individuals relinquish their personal freedoms and more egalitarian smaller communities in favor of larger, more authoritarian states. Fukuyama goes further, saying that charismatic leaders can leverage this mass mobilization as a military force, achieving victories and securing peace, which in turn further legitimizes their authority. He cites the example of Muhammad, whose influence facilitated the rise of a powerful state in North Africa and the Middle East, despite limited economic foundations. The third is rational-legal authority, whereby legitimacy is derived from the belief that a certain group has been placed in power in a legal manner, and that their actions are justifiable according to a specific code of written laws. Weber believed that the modern state is characterized primarily by appeals to rational-legal authority.

    Some states are often labeled as “weak” or “failed”. In David Samuels’s words, “…a failed state occurs when sovereignty over claimed territory has collapsed or was never effectively established at all”. Authors like Samuels and Joel S. Migdal have explored the emergence of weak states, how they differ from Western “strong” states, and the consequences of weak statehood for the economic development of developing countries.

    Samuels introduces the idea of state capacity, which he uses to refer to the ability of the state to fulfill its basic functions, such as providing security, maintaining law and order, and delivering public services. When a state does not accomplish this, state failure happens (Samuels, 2012). Other authors like Jeffrey Herbst add to this idea by arguing that state failure is the result of weak or non-existent institutions, which means that there is no state legitimacy because states are not able to provide goods or services or maintain order and safety (Herbst, 1990). However, there are also ideas that challenge this notion of state failure. Stephen D. Krasner argues that state failure is not just the result of weak institutions, but rather a very complex phenomenon that varies according to context-specific circumstances, and should therefore not be analyzed through a simplistic understanding like the one normally presented (Krasner, 2004).

    In “The Problem of Failed States”, Susan Rice argues that state failure is an important threat to global stability and security, since failed states are vulnerable to terrorism and conflict (Rice, 1994). Additionally, it is believed that state failure hinders democratic values, since these states often experience political violence, authoritarian rule, and numerous human rights abuses (Rotberg, 2004). While there is great discussion regarding the direct effects of state failure, its indirect effects should also be highlighted: state failure can lead to refugee flows and cross-border conflicts, and failed states can become safe havens for criminal or extremist groups (Corbridge, 2005). In order to solve and prevent these issues in the future, it is necessary to focus on building strong institutions, promoting economic diversification and development, and addressing the causes of violence in each state (Mkandawire, 2001).

    To understand the formation of weak states, Samuels compares the formation of European states in the 1600s with the conditions under which more recent states were formed in the twentieth century. In this line of argument, the state allows a population to resolve a collective action problem: citizens recognize the authority of the state, and the state exercises the power of coercion over them. This kind of social organization required a decline in the legitimacy of traditional forms of ruling (like religious authorities), replaced by an increase in the legitimacy of depersonalized rule; an increase in the central government’s sovereignty; and an increase in the organizational complexity of the central government (bureaucracy).

    The transition to this modern state was possible in Europe around 1600 thanks to the confluence of factors like technological developments in warfare, which generated strong incentives to tax and to consolidate central structures of governance in order to respond to external threats. This was complemented by an increase in the production of food (as a result of productivity improvements), which made it possible to sustain a larger population and so increased the complexity and centralization of states. Finally, cultural changes challenged the authority of monarchies and paved the way for the emergence of modern states.

    The conditions that enabled the emergence of modern states in Europe were different for countries that started this process later. As a result, many of these states lack effective capabilities to tax and extract revenue from their citizens, which leads to problems like corruption, tax evasion and low economic growth. Unlike the European case, late state formation occurred in a context of limited international conflict, which diminished the incentives to tax and increase military spending. Also, many of these states emerged from colonization in a state of poverty and with institutions designed to extract natural resources, both of which have made state formation more difficult. European colonization also defined many arbitrary borders that mixed different cultural groups under the same national identities, which has made it difficult to build states with legitimacy among the whole population, since some states have to compete for it with other forms of political identity.

    As a complement to this argument, Migdal gives a historical account of how sudden social changes in the Third World during the Industrial Revolution contributed to the formation of weak states. The expansion of international trade that started around 1850 brought profound changes to Africa, Asia and Latin America, introduced with the objective of assuring the availability of raw materials for the European market. These changes consisted of: i) reforms to landownership laws aimed at integrating more land into the international economy; ii) increased taxation of peasants and small landowners, with taxes collected in cash instead of in kind, as had been usual up to that moment; and iii) the introduction of new and less costly modes of transportation, mainly railroads. As a result, the traditional forms of social control became obsolete, deteriorating the existing institutions and opening the way to the creation of new ones that did not necessarily lead these countries to build strong states. This fragmentation of the social order induced a political logic in which these states were captured to some extent by “strongmen”, who were capable of taking advantage of the above-mentioned changes and who challenged the sovereignty of the state. As a result, this decentralization of social control impeded the consolidation of strong states.

  • Customary law

    A legal custom is the established pattern of behavior within a particular social setting. A claim can be carried out in defense of “what has always been done and accepted by law”.

    Customary law (also, consuetudinary or unofficial law) exists where:

    a certain legal practice is observed and the relevant actors consider it to be an opinion of law or necessity (opinio juris). Most customary laws deal with standards of the community that have been long-established in a given locale. However, the term can also apply to areas of international law where certain standards have been nearly universal in their acceptance as correct bases of action – for example, laws against piracy or slavery (see hostis humani generis). In many, though not all instances, customary laws will have supportive court rulings and case law that have evolved over time to give additional weight to their rule as law and also to demonstrate the trajectory of evolution (if any) in the judicial interpretation of such law by relevant courts.

    A central issue regarding the recognition of custom is determining the appropriate methodology for knowing what practices and norms actually constitute customary law. It is not immediately clear that classic Western theories of jurisprudence can be reconciled in any useful way with conceptual analyses of customary law, and thus some scholars (like John Comaroff and Simon Roberts) have characterized customary law norms in their own terms. Yet there clearly remains some disagreement, as seen in John Hund’s critique of Comaroff and Roberts’ theory and his preference for the contributions of H. L. A. Hart. Hund argues that Hart’s The Concept of Law solves the conceptual problem faced by scholars who have attempted to articulate how customary law principles may be identified and defined, and how they operate in regulating social behavior and resolving disputes.

    Customary law is the set of customs, practices and beliefs that are accepted as obligatory rules of conduct by a community.

    Comaroff and Roberts’ famous work, “Rules and Processes”, attempted to detail the body of norms that constitute Tswana law in a way that was less legalistic (or rule-oriented) than Isaac Schapera’s. They defined “mekgwa le melao ya Setswana” in terms of Casalis and Ellenberger’s definition: melao being rules pronounced by a chief, and mekgwa norms that become customary law through traditional usage. Importantly, however, they noted that the Tswana seldom attempt to classify the vast array of existing norms into categories, and they thus termed this the ‘undifferentiated nature of the normative repertoire’. Moreover, they observed the co-existence of overtly incompatible norms that may breed conflict, whether due to the circumstances of a particular situation or inherently, due to their incongruous content. This lack of rule classification and failure to eradicate internal inconsistencies between potentially conflicting norms allows for much flexibility in dispute settlement and is also viewed as a ‘strategic resource’ for disputants who seek to advance their own success in a case. The latter incongruities (especially inconsistencies of norm content) are typically resolved by elevating one of the norms (tacitly) from ‘the literal to the symbolic’. This allows for the accommodation of both, as they now theoretically exist in different realms of reality. This is highly contextual, which further illustrates that norms cannot be viewed in isolation and are open to negotiation. Thus, although there are a small number of so-called non-negotiable norms, the vast majority are viewed and given substance contextually, which is seen as fundamental to the Tswana.

    Comaroff and Roberts describe how outcomes of specific cases have the ability to change the normative repertoire, as the repertoire of norms is seen to be both in a state of formation and transformation at all times. These changes are justified on the grounds that they merely give recognition to de facto observations of transformation. Furthermore, the legitimacy of a chief is a direct determinant of the legitimacy of his decisions. In the formulation of legislative pronouncements, as opposed to decisions made in dispute resolution, the chief first discusses the proposed norm with his advisors, then with the council of headmen; the public assembly then debates the proposed law and may accept or reject it. A chief can proclaim the law even if the public assembly rejects it, but this is not often done; and if the chief proclaims the legislation against the will of the public assembly, the legislation will become melao, but it is unlikely to be executed, because its effectiveness depends on the chief’s legitimacy and on the norm’s consistency with the practices (and changes in social relations) and will of the people under that chief.

    Regarding the invocation of norms in disputes, Comaroff and Roberts used the term “paradigm of argument” to refer to the linguistic and conceptual frame used by a disputant, whereby ‘a coherent picture of relevant events and actions in terms of one or more implicit or explicit normative referents’ is created. In their explanation, the complainant (who always speaks first) thus establishes a paradigm that the defendant can either accept, and therefore argue within, or reject, and therefore replace with his or her own paradigm (usually, the facts are not contested here). A defendant who means to change the paradigm will refer to norms as such; otherwise, norms are not ordinarily explicitly referenced in Tswana dispute resolution, since the audience typically already knows them, and the way one presents one’s case and constructs the facts establishes one’s paradigm by itself. The headman or chief adjudicating may do the same: he may accept the normative basis implied by the parties (or one of them) and thus not refer to norms in explicit language; he may isolate a factual issue in the dispute and make a decision on it without expressly referring to any norms; or he may impose a new or different paradigm onto the parties.

    Hund finds uncompelling Comaroff and Roberts’ flexibility thesis of a ‘repertoire of norms’ from which litigants and the adjudicator choose in the process of negotiating solutions between them. He is therefore concerned with disproving what he calls “rule scepticism” on their part. He notes that the concept of custom generally denotes convergent behaviour, but not all customs have the force of law. Hund therefore draws on Hart’s analysis distinguishing social rules, which have internal and external aspects, from habits, which have only external aspects. Internal aspects are the reflective attitude on the part of adherents toward certain behaviours perceived to be obligatory, according to a common standard. External aspects manifest in regular, observable behaviour, which is not in itself obligatory. In Hart’s analysis, then, social rules amount to custom that has legal force.

    Hart identifies three further differences between habits and binding social rules. First, a social rule exists where society frowns on deviation from the habit and attempts to prevent departures by criticising such behaviour. Second, such criticism is seen socially as a good reason for adhering to the habit, and it is welcomed. Third, members of a group behave in a common way not only out of habit or because everyone else is doing it, but because it is seen to be a common standard that should be followed, at least by some members. Hund, however, acknowledges the difficulty an outsider faces in knowing the dimensions of these criteria, which depend on an internal point of view.

    For Hund, the first form of rule scepticism concerns the widely held opinion that, because the content of customary law derives from practice, there are actually no objective rules, since only behaviour informs their construction. On this view, it is impossible to distinguish between behaviour that is rule-bound and behaviour that is not, i.e., between behaviour that is motivated by adherence to law (or at least done in recognition of the law) and behaviour that is merely a response to other factors. Hund sees this as problematic because it makes quantifying the law almost impossible, since behaviour is obviously inconsistent. Hund argues that this is a misconception based on a failure to acknowledge the importance of the internal element. In his view, by using the criteria described above, there is no such problem in deciphering what constitutes “law” in a particular community.

    According to Hund, the second form of rule scepticism says that, though a community may have rules, those rules are not arrived at ‘deductively’, i.e. they are not created through legal/moral reasoning alone but are instead driven by the personal/political motives of those who create them. The scope for such influence is created by the loose and undefined nature of customary law, which, Hund argues, grants customary-lawmakers (often through traditional ‘judicial processes’) wide discretion in its application. Yet Hund contends that the fact that rules might sometimes be arrived at in this more ad hoc way does not mean that this defines the system. If one requires a perfect system, in which laws are created only deductively, then one is left with a system that has no rules. For Hund, this cannot be so, and an explanation for these kinds of law-making processes is found in Hart’s conception of “secondary rules” (rules in terms of which the main body of norms is recognised). Hund therefore says that for some cultures, for instance in some sections of Tswana society, the secondary rules have developed only to the point where laws are determined with reference to politics and personal preference. This does not mean that they are not “rules”. Hund argues that if we acknowledge a developmental pattern in societies’ constructions of these secondary rules, then we can understand how a given society constructs its laws and how it differs from societies that have come to rely on an objective, stand-alone body of rules.

    The modern codification of civil law developed from the tradition of medieval custumals, collections of local customary law that developed in a specific manorial or borough jurisdiction, and which were slowly pieced together mainly from case law and later written down by local jurists. Custumals acquired the force of law when they became the undisputed rule by which certain rights, entitlements, and obligations were regulated between members of a community. Some examples include Bracton’s De Legibus et Consuetudinibus Angliae for England, the Coutume de Paris for the city of Paris, the Sachsenspiegel for northern Germany, and the many fueros of Spain.

    In international law, customary law refers to the Law of Nations or the legal norms that have developed through the customary exchanges between states over time, whether based on diplomacy or aggression. Essentially, legal obligations are believed to arise between states to carry out their affairs consistently with past accepted conduct. These customs can also change based on the acceptance or rejection by states of particular acts. Some principles of customary law have achieved the force of peremptory norms, which cannot be violated or altered except by a norm of comparable strength. These norms are said to gain their strength from universal acceptance, such as the prohibitions against genocide and slavery. Customary international law can be distinguished from treaty law, which consists of explicit agreements between nations to assume obligations. However, many treaties are attempts to codify pre-existing customary law.

    Customary law is a recognized source of law within jurisdictions of the civil law tradition, where it may be subordinate to both statutes and regulations. In addressing custom as a source of law within the civil law tradition, John Henry Merryman notes that, though the attention it is given in scholarly works is great, its importance is “slight and decreasing”. On the other hand, in many countries around the world, one or more types of customary law continue to exist side by side with official law, a condition referred to as legal pluralism (see also List of national legal systems).

    In the canon law of the Catholic Church, custom is a source of law. Canonical jurisprudence, however, differs from civil law jurisprudence in requiring the express or implied consent of the legislator for a custom to obtain the force of law.

    In the English common law, “long usage” must be established.

    It is a broad principle of property law that, if something has gone on for a long time without objection, whether it be using a right of way or occupying land to which one has no title, the law will eventually recognise the fact and give the person doing it the legal right to continue.

    Such rights are known in case law as “customary rights”. Something which has been practised since time immemorial by reference to a particular locality may acquire the legal status of a custom, which is a form of local law. The legal criteria defining a custom are precise. The most common claim in recent times is for customary rights to moor a vessel.

    The mooring must have been in continuous use for “time immemorial”, defined by legal precedent as 12 years (or 20 years for Crown land), for the same purpose by the same class of people. To give two examples: a custom of mooring established in past times, over two hundred years, by the fishing fleet of local inhabitants of a coastal community will not simply transfer so as to benefit present-day recreational boat owners who may hail from much further afield; whereas a group of houseboats on a mooring that has been in continuous use for the last 25 years, with a mixture of owner-occupiers and rented houseboats, may clearly continue to be used by houseboats whose owners live in the same town or city. Both the purpose of the moorings and the class of persons benefited by the custom must have been clear and consistent.

    In Canada, customary aboriginal law has a constitutional foundation and for this reason has increasing influence.

    In the Scandinavian countries customary law continues to exist and has great influence.

    Customary law is also used in some developing countries, usually alongside common or civil law. For example, in Ethiopia, despite the adoption of legal codes based on civil law in the 1950s, according to Dolores Donovan and Getachew Assefa there are more than 60 systems of customary law currently in force, “some of them operating quite independently of the formal state legal system”. They offer two reasons for the relative autonomy of these customary law systems: one is that the Ethiopian government lacks sufficient resources to enforce its legal system in every corner of Ethiopia; the other is that the Ethiopian government has made a commitment to preserving these customary systems within its boundaries.

    In 1995, President of Kyrgyzstan Askar Akaev announced a decree to revitalize the aqsaqal courts of village elders. The courts would have jurisdiction over property, torts and family law. The aqsaqal courts were eventually included under Article 92 of the Kyrgyz constitution. As of 2006, there were approximately 1,000 aqsaqal courts throughout Kyrgyzstan, including in the capital of Bishkek. Akaev linked the development of these courts to the rekindling of Kyrgyz national identity. In a 2005 speech, he connected the courts back to the country’s nomadic past and extolled how the courts expressed the Kyrgyz ability of self-governance. Similar aqsaqal courts exist, with varying levels of legal formality, in other countries of Central Asia.

    The Somali people in the Horn of Africa follow a customary law system referred to as xeer. It survives to a significant degree everywhere in Somalia and in the Somali communities in the Ogaden. Economist Peter Leeson attributes the increase in economic activity since the fall of the Siad Barre administration to the security in life, liberty and property provided by Xeer in large parts of Somalia. The Dutch attorney Michael van Notten also draws upon his experience as a legal expert in his comprehensive study on Xeer, The Law of the Somalis: A Stable Foundation for Economic Development in the Horn of Africa (2005).

    In India many customs are accepted by law. For example, Hindu marriage ceremonies are recognized by the Hindu Marriage Act.

    In Indonesia, customary adat laws of the country’s various indigenous ethnicities are recognized, and customary dispute resolution is recognized in Papua. Indonesian adat law is mainly divided into 19 circles, namely Aceh; Gayo, Alas, and Batak; Minangkabau; South Sumatra; the Malay regions; Bangka and Belitung; Kalimantan; Minahasa; Gorontalo; Toraja; South Sulawesi; Ternate; the Moluccas; Papua; Timor; Bali and Lombok; Central and East Java including the island of Madura; Sunda; and the Javanese monarchies, including the Yogyakarta Sultanate, Surakarta Sunanate, and the Pakualaman and Mangkunegaran princely states.

    In the Philippines, the Indigenous Peoples’ Rights Act of 1997 recognizes customary laws of indigenous peoples within their domain.

  • Rule of law

    The rule of law is a political and legal ideal that all people and institutions within a political body are accountable to the same laws, including lawmakers, government officials, and judges. It is sometimes stated simply as “no one is above the law” or “all are equal before the law”. The Encyclopædia Britannica defines it as “the mechanism, process, institution, practice, or norm that supports the equality of all citizens before the law, secures a nonarbitrary form of government, and more generally prevents the arbitrary use of power.”

    Use of the phrase can be traced to 16th-century Britain. In the following century, Scottish theologian Samuel Rutherford employed it in arguing against the divine right of kings. John Locke wrote that freedom in society means being subject only to laws written by a legislature that apply to everyone, with a person being otherwise free from both governmental and private restrictions of liberty. The phrase “rule of law” was further popularized in the 19th century by British jurist A. V. Dicey. However, the principle, if not the phrase itself, was recognized by ancient thinkers. Aristotle wrote: “It is more proper that law should govern than any one of the citizens.”

    The term rule of law is closely related to constitutionalism as well as Rechtsstaat. It refers to a political situation, not to any specific legal rule. It is distinct from the rule of man, in which one person or group of persons rules arbitrarily.

    History

    Although credit for popularizing the expression “the rule of law” in modern times is usually given to A. V. Dicey, development of the legal concept can be traced through history to many ancient civilizations, including ancient Greece, Mesopotamia, India, and Rome.

    Early history (to 15th century)

The earliest conception of the rule of law can be traced back to the Indian epics Ramayana and Mahabharata, the earliest versions of which date to around the 8th or 9th century BC. The Mahabharata deals with the concepts of Dharma (used to mean law and duty interchangeably), Rajdharma (duty of the king) and Dharmaraja. It states in one of its slokas that, “The people should execute a king who does not protect them, but deprives them of their property and assets and who takes no advice or guidance from any one. Such a king is not a king but misfortune.”

    Other sources for the philosophy of rule of law can be traced to the Upanishads which state that, “The law is the king of the kings. No one is higher than the law. Not even the king.” Other commentaries include Kautilya’s Arthashastra (4th-century BC), Manusmriti (dated to the 1st to 3rd century CE), Yajnavalkya-Smriti (dated between the 3rd and 5th century CE), Brihaspati Smriti (dated between 15 CE and 16 CE).

    Modern period (1500 CE – present)

In 1481, during the reign of Ferdinand II of Aragon, the Constitució de l’Observança was approved by the General Court of Catalonia, establishing the submission of royal power (including its officers) to the laws of the Principality of Catalonia.

    In 1607, English Chief Justice Sir Edward Coke said in the Case of Prohibitions “that the law was the golden met-wand and measure to try the causes of the subjects; and which protected His Majesty in safety and peace: with which the King (James I) was greatly offended, and said, that then he should be under the law, which was treason to affirm, as he said; to which I said, that Bracton saith, quod Rex non debet esse sub homine, sed sub Deo et lege (that the King ought not to be under any man but under God and the law.).”

    Among the first modern authors to use the term and give the principle theoretical foundations was Samuel Rutherford in Lex, Rex (1644). The title, Latin for “the law is king”, subverts the traditional formulation rex lex (“the king is law”). James Harrington wrote in Oceana (1656), drawing principally on Aristotle’s Politics, that among forms of government an “Empire of Laws, and not of Men” was preferable to an “Empire of Men, and not of Laws”.

    John Locke also discussed this issue in his Second Treatise of Government (1690):

    The natural liberty of man is to be free from any superior power on earth, and not to be under the will or legislative authority of man, but to have only the law of nature for his rule. The liberty of man, in society, is to be under no other legislative power, but that established, by consent, in the commonwealth; nor under the dominion of any will, or restraint of any law, but what that legislative shall enact, according to the trust put in it. Freedom then is not what Sir Robert Filmer tells us, Observations, A. 55. a liberty for every one to do what he lists, to live as he pleases, and not to be tied by any laws: but freedom of men under government is, to have a standing rule to live by, common to every one of that society, and made by the legislative power erected in it; a liberty to follow my own will in all things, where the rule prescribes not; and not to be subject to the inconstant, uncertain, unknown, arbitrary will of another man: as freedom of nature is, to be under no other restraint but the law of nature.

    The principle was also discussed by Montesquieu in The Spirit of Law (1748). The phrase “rule of law” appears in Samuel Johnson’s Dictionary (1755).

The notion that no one is above the law was popular during the founding of the United States in 1776. For example, Thomas Paine wrote in his pamphlet Common Sense that “in America, the law is king. For as in absolute governments the King is law, so in free countries the law ought to be king; and there ought to be no other.” In 1780, John Adams enshrined this principle in Article VI of the Declaration of Rights in the Constitution of the Commonwealth of Massachusetts:

    No man, nor corporation, or association of men, have any other title to obtain advantages, or particular and exclusive privileges, distinct from those of the community, than what arises from the consideration of services rendered to the public; and this title being in nature neither hereditary, nor transmissible to children, or descendants, or relations by blood, the idea of a man born a magistrate, lawgiver, or judge, is absurd and unnatural.

    The term “rule of law” was popularised by British jurist A. V. Dicey, who viewed the rule of law in common law systems as comprising three principles. First, that government must follow the law that it makes; second, that no one is exempt from the operation of the law and that it applies equally to all; and third, that general rights emerge from particular cases decided by the courts.

    The influence of Britain, France and the United States contributed to spreading the principle of the rule of law to other countries around the world.

    Legal theory and philosophy

    The Oxford English Dictionary has defined rule of law as:

    The authority and influence of law in society, esp. when viewed as a constraint on individual and institutional behaviour; (hence) the principle whereby all members of a society (including those in government) are considered equally subject to publicly disclosed legal codes and processes.

    Despite wide use by politicians, judges and academics, the rule of law has been described as “an exceedingly elusive notion”. In modern legal theory, there are at least two principal conceptions of the rule of law: a formalist or “thin” definition, and a substantive or “thick” definition. Formalist definitions of the rule of law do not make a judgment about the justness of law itself, but define specific procedural attributes that a legal framework must have in order to be in compliance with the rule of law. Substantive conceptions of the rule of law, generally from more recent authors, go beyond this and include certain substantive rights that are said to be based on, or derived from, the rule of law. One occasionally encounters a third “functional” conception.

    The functional interpretation of the term rule of law contrasts the rule of law with the rule of man. According to the functional view, a society in which government officers have a great deal of discretion has a low degree of “rule of law”, whereas a society in which government officers have little discretion has a high degree of “rule of law”. Upholding the rule of law can sometimes require the punishment of those who commit offenses that are justifiable under natural law but not statutory law.[55] The rule of law is thus somewhat at odds with flexibility, even when flexibility may be preferable.

    Social science analyses

    Economics

    Economists and lawyers have studied and analysed the rule of law’s impact on economic development. In particular, a major question in the area of law and economics is whether the rule of law matters to economic development, particularly in developing nations. The economist F. A. Hayek analyzed how the rule of law might be beneficial to the free market. Hayek proposed that under the rule of law, individuals would be able to make wise investments and future plans with some confidence in a successful return on investment when he stated: “under the Rule of Law the government is prevented from stultifying individual efforts by ad hoc action. Within the known rules of the game the individual is free to pursue his personal ends and desires, certain that the powers of government will not be used deliberately to frustrate his efforts.”

    Studies have shown that weak rule of law (for example, discretionary regulatory enforcement) discourages investment. Economists have found, for example, that a rise in discretionary regulatory enforcement caused US firms to abandon international investments.

    Constitutional economics is the study of the compatibility of economic and financial decisions within existing constitutional law frameworks. Aspects of constitutional frameworks relevant to both the rule of law and public economics include government spending on the judiciary, which, in many transitional and developing countries, is completely controlled by the executive. Additionally, judicial corruption may arise from both the executive branch and private actors. Standards of constitutional economics such as transparency can also be used during annual budget processes for the benefit of the rule of law. Further, the availability of an effective court system in situations of unfair government spending and executive impoundment of previously authorized appropriations is a key element for the success of the rule of law.

Daron Acemoglu and James A. Robinson, who were awarded the 2024 Nobel Memorial Prize in Economic Sciences, emphasize the importance of the rule of law in their book Why Nations Fail. They argue that the rule of law ensures that laws apply equally to everyone, including elites and government officials. This principle is crucial for promoting inclusive institutions, which are key to sustained economic growth and prosperity.

    The authors highlight historical examples, such as the French Revolution, where the rule of law helped dismantle absolutism and feudal privileges, paving the way for inclusive institutions. They also discuss how pluralistic political institutions are essential for the rule of law to thrive, as they create broad coalitions that support fairness and equality.

    Comparative approaches

The term “rule of law” has been used primarily in English-speaking countries, and it is not yet fully clarified with regard to well-established democracies such as Sweden, Denmark, France, Germany, or Japan. A common language between lawyers of common law and civil law countries is critically important for research into the links between the rule of law and the real economy.

    The rule of law can be hampered when there is a disconnect between legal and popular consensus. For example, under the auspices of the World Intellectual Property Organization, nominally strong copyright laws have been implemented throughout most of the world; but because the attitude of much of the population does not conform to these laws, a rebellion against ownership rights has manifested in rampant piracy, including an increase in peer-to-peer file sharing. Similarly, in Russia, tax evasion is common and a person who admits he does not pay taxes is not judged or criticized by his colleagues and friends, because the tax system is viewed as unreasonable. Bribery likewise has different normative implications across cultures.

    Education

    UNESCO has argued that education has an important role in promoting the rule of law and a culture of lawfulness, providing an important protective function by strengthening learners’ abilities to face and overcome difficult life situations. Young people can be important contributors to a culture of lawfulness, and governments can provide educational support that nurtures positive values and attitudes in future generations. A movement towards education for justice seeks to promote the rule of law in schools.

Political science

In his book The Origins of Political Order, Francis Fukuyama identifies the rule of law as a requirement for political stability.

    Status in various jurisdictions

The rule of law has been considered one of the key dimensions that determine the quality and good governance of a country. Research projects such as the Worldwide Governance Indicators define the rule of law as “the extent to which agents have confidence in and abide by the rules of society, and in particular the quality of contract enforcement, the police and the courts, as well as the likelihood of crime or violence.” Based on this definition, the Worldwide Governance Indicators project has developed aggregate measurements for the rule of law in more than 200 countries. Other evaluations, such as the World Justice Project Rule of Law Index, show that adherence to the rule of law fell in 61% of countries in 2022. Globally, this means that 4.4 billion people live in countries where the rule of law declined in 2021.

    United States

    All government officers of the United States, including the President, Justices of the Supreme Court, state judges and legislators, and all members of Congress, pledge first and foremost to uphold the Constitution, affirming that the rule of law is superior to the rule of any human leader. At the same time, the federal government has considerable discretion: the legislative branch is free to decide what statutes it will write, as long as it stays within its enumerated powers and respects the constitutionally protected rights of individuals. Likewise, the judicial branch has a degree of judicial discretion, and the executive branch also has various discretionary powers including prosecutorial discretion.

James Wilson said during the Philadelphia Convention in 1787 that, “Laws may be unjust, may be unwise, may be dangerous, may be destructive; and yet not be so unconstitutional as to justify the Judges in refusing to give them effect.” George Mason agreed that judges “could declare an unconstitutional law void. But with regard to every law, however unjust, oppressive or pernicious, which did not come plainly under this description, they would be under the necessity as judges to give it a free course.” Chief Justice John Marshall took a similar position in 1827: “When its existence as law is denied, that existence cannot be proved by showing what are the qualities of a law.”

    Scholars continue to debate whether the U.S. Constitution adopted a particular interpretation of the “rule of law”, and if so, which one. For example, John Harrison asserts that the word “law” in the Constitution is simply defined as that which is legally binding, rather than being “defined by formal or substantive criteria”, and therefore judges do not have discretion to decide that laws fail to satisfy such unwritten and vague criteria. Law professor Frederick Mark Gedicks disagrees, writing that Cicero, Augustine, Thomas Aquinas, and the framers of the U.S. Constitution believed that “an unjust law was not really a law at all”.

    Some modern scholars contend that the rule of law has been corroded during the past century by the instrumental view of law promoted by legal realists such as Oliver Wendell Holmes and Roscoe Pound. For example, Brian Tamanaha asserts: “The rule of law is a centuries-old ideal, but the notion that law is a means to an end became entrenched only in the course of the nineteenth and twentieth centuries.”

    Others argue that the rule of law has survived but was transformed to allow for the exercise of discretion by administrators. For much of American history, the dominant notion of the rule of law in administrative law has been some version of Dicey’s, that is, individuals should be able to challenge an administrative order by bringing suit in a court of general jurisdiction. The increased number of administrative cases led to fears that excess judicial oversight over administrative decisions would overwhelm the courts and destroy the advantages of specialization that led to the creation of administrative agencies in the first place. By 1941, a compromise had emerged. If administrators adopted procedures that more or less tracked “the ordinary legal manner” of the courts, further review of the facts by “the ordinary Courts of the land” was unnecessary. Thus Dicey’s rule of law was recast into a purely procedural form.

On July 1, 2024, in Trump v. United States, the Supreme Court held that presidents have absolute immunity for acts committed as president within their core constitutional purview, at least presumptive immunity for official acts within the outer perimeter of their official responsibility, and no immunity for unofficial acts. Legal scholars have warned of the negative impact of this decision on the status of the rule of law in the United States. Prior to that, in 1973 and 2000 the Office of Legal Counsel within the Department of Justice issued opinions saying that a sitting president cannot be indicted or prosecuted, but it is constitutional to indict and try a former president for the same offenses for which the President was impeached by the House of Representatives and acquitted by the Senate under the Impeachment Disqualification Clause of Article I, Section III.

Numerous definitions of “rule of law” are used in United States governmental bodies. An organization’s definition might depend on that organization’s goal. For instance, military occupation or counterinsurgency campaigns may necessitate prioritising physical security over human rights. U.S. Army doctrine and U.S. Government (USG) interagency agreements might see the rule of law as a principle of governance. Outlines of different definitions are given in a JAG Corps handbook for judge advocates deployed with the US Army.

    Europe

The preamble of the European Convention for the Protection of Human Rights and Fundamental Freedoms refers to “the governments of European countries which are like-minded and have a common heritage of political traditions, ideals, freedom and the rule of law”.

In France and Germany the concepts of the rule of law (État de droit and Rechtsstaat respectively) are analogous to the principles of constitutional supremacy and protection of fundamental rights from public authorities, particularly the legislature. France was one of the early pioneers of the ideas of the rule of law. The German interpretation is more rigid but similar to that of France and the United Kingdom.

    United Kingdom

    Main article: Rule of law in the United Kingdom
    See also: History of the constitution of the United Kingdom
In the United Kingdom the rule of law is a long-standing principle of the way the country is governed, dating from England’s Magna Carta in 1215 and the Bill of Rights 1689. In his 19th-century classic work Introduction to the Study of the Law of the Constitution (1885), A. V. Dicey, a constitutional scholar and lawyer, wrote of the twin pillars of the British constitution: the rule of law and parliamentary sovereignty.

    Asia

    East Asian cultures are influenced by two schools of thought, Confucianism, which advocated good governance as rule by leaders who are benevolent and virtuous, and legalism, which advocated strict adherence to law. The influence of one school of thought over the other has varied throughout the centuries. One study indicates that throughout East Asia, only South Korea, Singapore, Japan, Taiwan and Hong Kong have societies that are robustly committed to a law-bound state. According to Awzar Thi, a member of the Asian Human Rights Commission, the rule of law in Cambodia and most of Asia is weak or nonexistent:

Apart from a number of states and territories, across the continent there is a huge gulf between the rule of law rhetoric and reality. In Thailand, the police force favors the rich and the corrupt. In Cambodia, judges are proxies for the ruling political party … That a judge may harbor political prejudice or apply the law unevenly are the smallest worries for an ordinary criminal defendant in Asia. More likely ones are: Will the police fabricate the evidence? Will the prosecutor bother to show up? Will the judge fall asleep? Will I be poisoned in prison? Will my case be completed within a decade?

    In countries such as China and Vietnam, the transition to a market economy has been a major factor in a move toward the rule of law, because the rule of law is important to foreign investors and to economic development. It remains unclear whether the rule of law in countries like China and Vietnam will be limited to commercial matters or will spill into other areas as well, and if so whether that spillover will enhance prospects for related values such as democracy and human rights.

  • International law

    International law, also known as public international law and the law of nations, is the set of rules, norms, legal customs and standards that states and other actors feel an obligation to, and generally do, obey in their mutual relations. In international relations, actors are simply the individuals and collective entities, such as states, international organizations, and non-state groups, which can make behavioral choices, whether lawful or unlawful. Rules are formal, typically written expectations that outline required behavior, while norms are informal, often unwritten guidelines about appropriate behavior that are shaped by custom and social practice. It establishes norms for states across a broad range of domains, including war and diplomacy, economic relations, and human rights.

    International law differs from state-based domestic legal systems in that it operates largely through consent, since there is no universally accepted authority to enforce it upon sovereign states. States and non-state actors may choose to not abide by international law, and even to breach a treaty, but such violations, particularly of peremptory norms, can be met with disapproval by others and in some cases coercive action including diplomacy, economic sanctions, and war.

    The sources of international law include international custom (general state practice accepted as law), treaties, and general principles of law recognised by most national legal systems. Although international law may also be reflected in international comity—the practices adopted by states to maintain good relations and mutual recognition—such traditions are not legally binding. The relationship and interaction between a national legal system and international law is complex and variable. National law may become international law when treaties permit national jurisdiction to supranational tribunals such as the European Court of Human Rights or the International Criminal Court. Treaties such as the Geneva Conventions require national law to conform to treaty provisions. National laws or constitutions may also provide for the implementation or integration of international legal obligations into domestic law.

The modern term “international law” was originally coined by Jeremy Bentham in his 1789 book Introduction to the Principles of Morals and Legislation to replace the older law of nations, a direct translation of the late medieval concepts of ius gentium, used by Hugo Grotius, and droit des gens, used by Emer de Vattel. The definition of international law has been debated; Bentham referred specifically to relationships between states, a definition which has been criticised for its narrow scope. Lassa Oppenheim defined it in his treatise as “a law between sovereign and equal states based on the common consent of these states” and this definition has been largely adopted by international legal scholars.

    There is a distinction between public and private international law; the latter is concerned with whether national courts can claim jurisdiction over cases with a foreign element and the application of foreign judgments in domestic law, whereas public international law covers rules with an international origin. The difference between the two areas of law has been debated as scholars disagree about the nature of their relationship. Joseph Story, who originated the term “private international law”, emphasised that it must be governed by the principles of public international law but other academics view them as separate bodies of law. Another term, transnational law, is sometimes used to refer to a body of both national and international rules that transcend the nation state, although some academics emphasise that it is distinct from either type of law. It was defined by Philip Jessup as “all law which regulates actions or events that transcend national frontiers”.

A more recent concept is supranational law, which was described in a 1969 paper as “a relatively new word in the vocabulary of politics”. Systems of supranational law arise when nations explicitly cede their right to make decisions to this system’s judiciary and legislature, which then have the right to make laws that are directly effective in each member state. This has been described as “a level of international integration beyond mere intergovernmentalism yet still short of a federal system”. The most common example of a supranational system is the European Union.

    With origins tracing back to antiquity, states have a long history of negotiating interstate agreements. An initial framework was conceptualised by the Ancient Romans and this idea of ius gentium has been used by various academics to establish the modern concept of international law. Among the earliest recorded examples are peace treaties between the Mesopotamian city-states of Lagash and Umma (approximately 3100 BCE), and an agreement between the Egyptian pharaoh, Ramesses II, and the Hittite king, Ḫattušili III, concluded in 1279 BCE. Interstate pacts and agreements were negotiated and agreed upon by polities across the world, from the eastern Mediterranean to East Asia. In Ancient Greece, many early peace treaties were negotiated between its city-states and, occasionally, with neighbouring states. The Roman Empire established an early conceptual framework for international law, jus gentium, which governed the status of foreigners living in Rome and relations between foreigners and Roman citizens. Adopting the Greek concept of natural law, the Romans conceived of jus gentium as being universal. However, in contrast to modern international law, the Roman law of nations applied to relations with and between foreign individuals rather than among political units such as states.

    Beginning with the Spring and Autumn period of the eighth century BCE, China was divided into numerous states that were often at war with each other. Rules for diplomacy and treaty-making emerged, including notions regarding just grounds for war, the rights of neutral parties, and the consolidation and partition of states; these concepts were sometimes applied to relations with barbarians along China’s western periphery beyond the Central Plains. The subsequent Warring States period saw the development of two major schools of thought, Confucianism and Legalism, both of which held that the domestic and international legal spheres were closely interlinked, and sought to establish competing normative principles to guide foreign relations. Similarly, the Indian subcontinent was divided into various states, which over time developed rules of neutrality, treaty law, and international conduct, and established both temporary and permanent embassies.

Following the collapse of the western Roman Empire in the fifth century CE, Europe fragmented into numerous often-warring states for much of the next five centuries. Political power was dispersed across a range of entities, including the Church, mercantile city-states, and kingdoms, most of which had overlapping and ever-changing jurisdictions. As in China and India, these divisions prompted the development of rules aimed at providing stable and predictable relations. Early examples include canon law, which governed ecclesiastical institutions and clergy throughout Europe; the lex mercatoria (“merchant law”), which concerned trade and commerce; and various codes of maritime law, such as the Rolls of Oléron, aimed at regulating shipping in north-western Europe, and the later Laws of Wisby, enacted among the commercial Hanseatic League of northern Europe and the Baltic region.

    In the Islamic world, Muhammad al-Shaybani published Al-Siyar Al-Kabīr in the eighth century, which served as a fundamental reference work for siyar, a subset of Sharia law, which governed foreign relations. This was based on the division of the world into three categories: the dar al-Islam, where Islamic law prevailed; the dar al-sulh, non-Islamic realms that concluded an armistice with a Muslim government; and the dar al-harb, non-Islamic lands which were contested through jihad. Islamic legal principles concerning military conduct served as precursors to modern international humanitarian law and institutionalised limitations on military conduct, including guidelines for commencing war, distinguishing between civilians and combatants and caring for the sick and wounded.

    During the European Middle Ages, international law was concerned primarily with the purpose and legitimacy of war, seeking to determine what constituted “just war”. The Greco-Roman concept of natural law was combined with religious principles by Jewish philosopher Maimonides (1135–1204) and Christian theologian Thomas Aquinas (1225–1274) to create the new discipline of the “law of nations”, which unlike its eponymous Roman predecessor, applied natural law to relations between states. In Islam, a similar framework was developed wherein the law of nations was derived, in part, from the principles and rules set forth in treaties with non-Muslims.

The 15th century witnessed a confluence of factors that contributed to an accelerated development of international law. Italian jurist Bartolus de Saxoferrato (1313–1357) was considered the founder of private international law. Another Italian jurist, Baldus de Ubaldis (1327–1400), provided commentaries and compilations of Roman, ecclesiastical, and feudal law, creating an organised source of law that could be referenced by different nations. Alberico Gentili (1552–1608) took a secular view of international law, authoring various books on issues in international law, notably Law of War, which provided comprehensive commentary on the laws of war and treaties. Francisco de Vitoria (1486–1546), who was concerned with the treatment of indigenous peoples by Spain, invoked the law of nations as a basis for their innate dignity and rights, articulating an early version of sovereign equality between peoples. Francisco Suárez (1548–1617) emphasised that international law was founded upon natural law and human positive law.

Dutch jurist Hugo Grotius (1583–1645) is widely regarded as the father of international law, being one of the first scholars to articulate an international order that consists of a “society of states” governed not by force or warfare but by actual laws, mutual agreements, and customs. Grotius secularised international law; his 1625 work, De Jure Belli ac Pacis, laid down a system of principles of natural law that bind all nations regardless of local custom or law. He inspired two nascent schools of international law, the naturalists and the positivists. In the former camp was German jurist Samuel von Pufendorf (1632–1694), who stressed the supremacy of the law of nature over states. His 1672 work, Of the Law of Nature and Nations, expanded on the theories of Grotius and grounded natural law in reason and the secular world, asserting that it regulated only external acts of states. Pufendorf challenged the Hobbesian notion that the state of nature was one of war and conflict, arguing that the natural state of the world is actually peaceful but weak and uncertain without adherence to the law of nations. The actions of a state consist of nothing more than the sum of the individuals within that state, thereby requiring the state to apply a fundamental law of reason, which is the basis of natural law. He was among the earliest scholars to expand international law beyond European Christian nations, advocating for its application and recognition among all peoples on the basis of shared humanity.

    In contrast, positivist writers, such as Richard Zouche (1590–1661) in England and Cornelis van Bynkershoek (1673–1743) in the Netherlands, argued that international law should derive from the actual practice of states rather than Christian or Greco-Roman sources. The study of international law shifted away from its core concern with the law of war and towards domains such as the law of the sea and commercial treaties. The positivist school grew more popular as it reflected accepted views of state sovereignty and was consistent with the empiricist approach to philosophy that was then gaining acceptance in Europe.

    The developments of the 17th century culminated in the conclusion of the Peace of Westphalia in 1648, which is considered the seminal event in international law. The resulting Westphalian sovereignty is said to have established the current international legal order characterised by independent nation states, which have equal sovereignty regardless of their size and power, defined primarily by non-interference in the domestic affairs of sovereign states, although historians have challenged this narrative. The idea of nationalism further solidified the concept and formation of nation-states. Elements of the naturalist and positivist schools were synthesised, notably by German philosopher Christian Wolff (1679–1754) and Swiss jurist Emer de Vattel (1714–1767), both of whom sought a middle-ground approach. During the 18th century, the positivist tradition gained broader acceptance, although the concept of natural rights remained influential in international politics, particularly through the republican revolutions of the United States and France.

    Until the mid-19th century, relations between states were dictated mostly by treaties, agreements between states to behave in a certain way, unenforceable except by force, and nonbinding except as matters of honour and faithfulness. One of the first instruments of modern armed conflict law was the Lieber Code of 1863, which governed the conduct of warfare during the American Civil War, and is noted for codifying rules and articles of war adhered to by nations across the world, including the United Kingdom, Prussia, Serbia and Argentina. In the years that followed, numerous other treaties and bodies were created to regulate the conduct of states towards one another, including the Permanent Court of Arbitration in 1899, and the Hague and Geneva Conventions, the first of which was passed in 1864.

    Colonial expansion by European powers reached its peak in the late 19th century and its influence began to wane following the unprecedented bloodshed of World War I, which spurred the creation of international organisations. Right of conquest was generally recognised as international law before World War II. The League of Nations was founded to safeguard peace and security. International law began to incorporate notions such as self-determination and human rights. The United Nations (UN) was established in 1945 to replace the League, with an aim of maintaining collective security. A more robust international legal order followed, buttressed by institutions such as the International Court of Justice (ICJ) and the UN Security Council (UNSC). The International Law Commission (ILC) was established in 1947 to develop and codify international law.

    From the 1940s through the 1970s, decolonisation across the world resulted in the establishment of scores of newly independent states; the later dissolution of the Soviet bloc added still more. As these former colonies became their own states, they adopted European views of international law. A flurry of institutions, ranging from the International Monetary Fund (IMF) and the International Bank for Reconstruction and Development (World Bank) to the World Health Organization, furthered the development of a multilateralist approach as states chose to compromise on sovereignty to benefit from international cooperation. Since the 1980s, there has been an increasing focus on the phenomenon of globalisation and on protecting human rights on the global scale, particularly when minorities or indigenous communities are involved, as concerns are raised that globalisation may be increasing inequality in the international legal system.

    The sources of international law applied by the community of nations are listed in Article 38(1) of the Statute of the International Court of Justice, which is considered authoritative in this regard. These categories are, in order, international treaties, customary international law, general legal principles, and judicial decisions and the teachings of prominent legal scholars as “a subsidiary means for the determination of rules of law”. The sequential arrangement of the sources has been taken to suggest an implicit hierarchy; however, the statute does not provide for one, and other academics have argued that the sources must therefore be of equal standing.

    General principles of law have been defined in the Statute as “general principles of law recognized by civilized nations” but there is no academic consensus about what is included within this scope. They are considered to be derived from both national and international legal systems, although including the latter category has led to debate about potential cross-over with international customary law. The relationship of general principles to treaties or custom has generally been considered to be “fill[ing] the gaps” although there is still no conclusion about their exact relationship in the absence of a hierarchy.

    A treaty is defined in Article 2 of the Vienna Convention on the Law of Treaties (VCLT) as “an international agreement concluded between States in written form and governed by international law, whether embodied in a single instrument or in two or more related instruments and whatever its particular designation”. The definition specifies that the parties must be states; however, international organisations are also considered to have the capacity to enter treaties. Treaties are binding through the principle of pacta sunt servanda, which allows states to create legal obligations on themselves through consent. The treaty must be governed by international law; however, it will likely be interpreted by national courts. The VCLT, which codifies several bedrock principles of treaty interpretation, holds that a treaty “shall be interpreted in good faith in accordance with the ordinary meaning to be given to the terms of the treaty in their context and in the light of its object and purpose”. This represents a compromise between three theories of interpretation: the textual approach, which looks to the ordinary meaning of the text; the subjective approach, which considers factors such as the drafters’ intention; and the teleological approach, which interprets a treaty according to its object and purpose.

    A state must express its consent to be bound by a treaty through signature, exchange of instruments, ratification, acceptance, approval or accession. Accession refers to a state choosing to become party to a treaty that it is unable to sign, such as when establishing a regional body. Where a treaty states that it will be enacted through ratification, acceptance or approval, the parties must sign to indicate acceptance of the wording, but there is no requirement on a state to later ratify the treaty, although it may still be subject to certain obligations. When signing or ratifying a treaty, a state can make a unilateral statement to negate or amend certain legal provisions, which can have one of three effects: the reserving state is bound by the treaty but the effects of the relevant provisions are precluded or changed, the reserving state is bound by the treaty but not the relevant provisions, or the reserving state is not bound by the treaty. An interpretive declaration is a separate process, where a state issues a unilateral statement to specify or clarify a treaty provision. This can affect the interpretation of the treaty, but it is generally not legally binding. A state is also able to issue a conditional declaration stating that it will consent to a given treaty only on the condition of a particular provision or interpretation.

    Article 54 of the VCLT provides that either party may terminate or withdraw from a treaty in accordance with its terms or at any time with the consent of the other party, with ‘termination’ applying to a bilateral treaty and ‘withdrawal’ applying to a multilateral treaty. Where a treaty contains no provisions allowing for termination or withdrawal, as with the Genocide Convention, withdrawal is prohibited unless the right was implied into the treaty or the parties had intended to allow for it. A treaty can also be held invalid, including where parties act ultra vires or negligently, where execution has been obtained through fraudulent, corrupt or forceful means, or where the treaty contradicts peremptory norms.

    Customary international law requires two elements: a consistent practice of states and the conviction of those states that the consistent practice is required by a legal obligation, referred to as opinio juris. Custom is distinguished from treaty law in that it is binding on all states, regardless of whether they have participated in the practice, with the exception of states that have been persistent objectors during the process of the custom being formed, and special or local forms of customary law. The requirement for state practice relates to the practice, whether through action or failure to act, of states in relation to other states or international organisations. There is no legal requirement for state practice to be uniform or long-running, although the ICJ set a high bar in the cases of Anglo-Norwegian Fisheries and North Sea Continental Shelf. There has been legal debate over the length of time necessary to establish a custom, with the most prominent view, expressed by Humphrey Waldock, being that it varies “according to the nature of the case”. The practice is not required to be followed universally by states, but there must be a “general recognition” by states “whose interests are specially affected”.

    The second element of the test, opinio juris, the belief of a party that a particular action is required by law, is referred to as the subjective element. The ICJ has stated in dictum in North Sea Continental Shelf that, “Not only must the acts concerned amount to a settled practice, but they must also be such, or be carried out in such a way, as to be evidence of a belief that this practice is rendered obligatory by the existence of a rule of law requiring it”. A committee of the International Law Association has argued that there is a general presumption of opinio juris where state practice is proven, but that it may be necessary to demonstrate opinio juris separately where the practice suggests that the states did not believe it was creating a precedent. The test in these circumstances is whether opinio juris can be proven by the states’ failure to protest. Other academics believe that the intention to create customary law can be shown by states including the principle in multiple bilateral and multilateral treaties, such that treaty law can serve to form customs.

    The adoption of the VCLT in 1969 established the concept of jus cogens, or peremptory norms, which are “a norm accepted and recognized by the international community of States as a whole as a norm from which no derogation is permitted and which can be modified only by a subsequent norm of general international law having the same character”. Where customary or treaty law conflicts with a peremptory norm, it will be considered invalid, but there is no agreed list of jus cogens norms. Academics have debated which principles qualify as peremptory norms, but the most widely agreed upon is the principle of non-use of force. The next year, the ICJ defined erga omnes obligations as those owed to “the international community as a whole”, which included the prohibition of genocide and fundamental human rights.

    There are generally two approaches to the relationship between international and national law, namely monism and dualism. Monism assumes that international and national law are part of the same legal order. Therefore, a treaty can directly become part of national law without the need for enacting legislation, although it will generally need to be approved by the legislature. Once approved, the content of the treaty is considered a law that has a higher status than national laws. Examples of countries with a monist approach are France and the Netherlands. The dualist approach considers that national and international law are two separate legal orders, so treaties are not granted a special status. The rules in a treaty can only be considered national law if the contents of the treaty have first been enacted into legislation. An example is the United Kingdom; after the country ratified the European Convention on Human Rights, the convention was only considered to have the force of law in national law after Parliament passed the Human Rights Act 1998.

    In practice, the division of countries between monism and dualism is often more complicated; countries following both approaches may accept peremptory norms as being automatically binding and they may approach treaties, particularly later amendments or clarifications, differently than they would approach customary law. Many countries with older or unwritten constitutions do not have explicit provision for international law in their domestic system and there has been an upswing in support for monism principles in relation to human rights and humanitarian law, as most principles governing these concepts can be found in international law.

    A state is defined under Article 1 of the Montevideo Convention on the Rights and Duties of States as a legal person with a permanent population, a defined territory, government and capacity to enter relations with other states. There is no requirement on population size, allowing micro-states such as San Marino and Monaco to be admitted to the UN, and no requirement of fully defined boundaries, allowing Israel to be admitted despite border disputes. There was originally an intention that a state must have self-determination, but now the requirement is for a stable political environment. The final requirement of being able to enter relations is commonly evidenced by independence and sovereignty.

    Under the principle of par in parem non habet imperium, all states are sovereign and equal, but state recognition often plays a significant role in political conceptions. A country may recognise another nation as a state and, separately, it may recognise that nation’s government as being legitimate and capable of representing the state on the international stage. There are two theories on recognition; the declaratory theory sees recognition as commenting on a current state of law which has been separately satisfied whereas the constitutive theory states that recognition by other states determines whether a state can be considered to have legal personality. States can be recognised explicitly through a released statement or tacitly through conducting official relations, although some countries have formally interacted without conferring recognition.

    Throughout the 19th century and the majority of the 20th century, states were protected by absolute immunity, so they could not face criminal prosecution for any actions. However, a number of countries began to distinguish between acta jure gestionis, commercial actions, and acta jure imperii, government actions; under the restrictive theory of immunity, states were immune where they were acting in a governmental capacity but not a commercial one. The European Convention on State Immunity in 1972 and the UN Convention on Jurisdictional Immunities of States and their Property attempt to restrict immunity in accordance with customary law.

    Historically, individuals have not been seen as entities in international law, as the focus was on the relationship between states. As human rights have become more important on the global stage, being codified by the UN General Assembly (UNGA) in the Universal Declaration of Human Rights in 1948, individuals have been given the power to defend their rights before judicial bodies. International law is largely silent on the issue of nationality, with the exception of cases of dual nationality or where someone is claiming rights under refugee law; yet, as argued by the political theorist Hannah Arendt, human rights are often tied to a person’s nationality. The European Court of Human Rights allows individuals to petition the court where their rights have been violated and national courts have not intervened, and the Inter-American Court of Human Rights and the African Court on Human and Peoples’ Rights have similar powers.

    Traditionally, sovereign states and the Holy See were the sole subjects of international law. With the proliferation of international organisations over the last century, they have also been recognised as relevant parties. One definition of international organisations comes from the ILC’s 2011 Draft Articles on the Responsibility of International Organizations, which in Article 2(a) states that it is “an organization established by treaty or other instrument governed by international law and possessing its own international legal personality”. This definition functions as a starting point but does not capture bodies that lack a separate legal personality yet nevertheless function as international organisations. The UN Economic and Social Council has emphasised a split between inter-governmental organisations (IGOs), which are created by inter-governmental agreements, and international non-governmental organisations (INGOs). All international organisations have members; generally this is restricted to states, although it can include other international organisations. Sometimes non-members will be allowed to participate in meetings as observers.

    The Yearbook of International Organizations sets out a list of international organisations, which include the UN, the WTO, the World Bank and the IMF. Generally, organisations consist of a plenary organ, where member states can be represented and heard; an executive organ, to decide matters within the competence of the organisation; and an administrative organ, to execute the decisions of the other organs and handle secretarial duties. International organisations typically provide for their privileges and immunities in relation to their member states in their constitutional documents or in multilateral agreements, such as the Convention on the Privileges and Immunities of the United Nations. These organisations also have the power to enter treaties, using the Vienna Convention on the Law of Treaties between States and International Organizations or between International Organizations as a basis, although it is not yet in force. They may also have the right to bring legal claims against states where, as set out in Reparation for Injuries, they have legal personality and their constitution grants the right to do so.

    United Nations

    The UNSC has the power under Chapter VII of the UN Charter to take decisive and binding actions against states committing “a threat to the peace, breach of the peace, or an act of aggression” for collective security, although prior to 1990 it had only intervened once, in the case of Korea in 1950. This power can only be exercised, however, where a majority of member states vote for it and none of the five permanent members of the UNSC casts a veto. Such a decision can be followed up with economic sanctions, military action, and similar uses of force. The UNSC also has a wide discretion under Article 24, which grants it “primary responsibility” for issues of international peace and security. The UNGA, concerned during the Cold War that the USSR could block any UNSC action, adopted the “Uniting for Peace” resolution of 3 November 1950, which allowed the organ to pass recommendations to authorise the use of force. This resolution also led to the practice of UN peacekeeping, which has notably been used in East Timor and Kosovo.

    International courts

    There are more than one hundred international courts in the global community, although states have generally been reluctant to allow their sovereignty to be limited in this way. The first international court was the Central American Court of Justice, established prior to World War I; after the war, the Permanent Court of International Justice (PCIJ) was created. The PCIJ was replaced by the ICJ, which is the best known international court due to its universal scope in relation to geographical jurisdiction and subject matter. There are additionally a number of regional courts, including the Court of Justice of the European Union, the EFTA Court and the Court of Justice of the Andean Community. Interstate arbitration can also be used to resolve disputes between states, leading in 1899 to the creation of the Permanent Court of Arbitration, which facilitates the process by maintaining a list of arbitrators. This process was used in the Island of Palmas case and to resolve disputes during the Eritrean-Ethiopian war.

    The ICJ operates as one of the six principal organs of the UN, based in The Hague with a panel of fifteen permanent judges. It has jurisdiction to hear cases involving states but cannot get involved in disputes involving individuals or international organisations. The states that can bring cases must be party to the Statute of the ICJ, although in practice most states are UN members and would therefore be eligible. The court has jurisdiction over all cases that are referred to it and all matters specifically referred to in the UN Charter or international treaties, although in practice there are no relevant matters in the UN Charter. The ICJ may also be asked by an international organisation to provide an advisory opinion on a legal question; such opinions are generally considered non-binding but authoritative.

    Conflict of laws, also known as private international law, was originally concerned with choice of law, determining which nation’s laws should govern a particular legal circumstance. Historically, the comity theory has been used, although the definition is unclear, sometimes referring to reciprocity and sometimes being used as a synonym for private international law. Story distinguished it from “any absolute paramount obligation, superseding all discretion on the subject”. There are three aspects to conflict of laws – determining whether a domestic court has jurisdiction over a dispute, determining which nation’s law applies, and determining whether foreign judgments can be recognised and enforced. The first question relates to whether the domestic court or a foreign court is best placed to decide the case. When determining the national law that should apply, the lex causae is the law that has been chosen to govern the case, which is generally foreign, and the lex fori is the national law of the court making the determination. Some examples are lex domicilii, the law of the domicile, and lex patriae, the law of the nationality.

    The rules which are applied to conflict of laws will vary depending on the national system determining the question. There have been attempts to codify an international standard to unify the rules so that differences in national law cannot lead to inconsistencies, such as through the Hague Convention on the Recognition and Enforcement of Foreign Judgments in Civil and Commercial Matters and the Brussels Regulations. These treaties codified practice on the enforcement of international judgments, stating that a foreign judgment would be automatically recognised and enforceable where required in the jurisdiction where the party resides, unless the judgment was contrary to public order or conflicted with a local judgment between the same parties. On a global level, the New York Convention on the Recognition and Enforcement of Foreign Arbitral Awards was introduced in 1958 to internationalise the enforcement of arbitral awards, although it does not extend to court judgments.

    A state must prove that it has jurisdiction before it can exercise its legal authority. This concept can be divided between prescriptive jurisdiction, which is the authority of a legislature to enact legislation on a particular issue, and adjudicative jurisdiction, which is the authority of a court to hear a particular case. This aspect of private international law should first be resolved by reference to domestic law, which may incorporate international treaties or other supranational legal concepts, although there are consistent international norms. There are five forms of jurisdiction which are consistently recognised in international law; an individual or act can be subject to multiple forms of jurisdiction. The first is the territorial principle, which states that a nation has jurisdiction over actions which occur within its territorial boundaries. The second is the nationality principle, also known as the active personality principle, whereby a nation has jurisdiction over actions committed by its nationals regardless of where they occur. The third is the passive personality principle, which gives a country jurisdiction over any actions which harm its nationals. The fourth is the protective principle, where a nation has jurisdiction in relation to threats to its “fundamental national interests”. The final form is universal jurisdiction, where a country has jurisdiction over certain acts based on the nature of the crime itself.

    Following World War II, the modern system of international human rights was developed to make states responsible for their human rights violations. The UN Economic and Social Council established the UN Commission on Human Rights in 1946, which developed the Universal Declaration of Human Rights (UDHR). The UDHR established non-binding international human rights standards for work, standards of living, housing and education, non-discrimination, a fair trial and the prohibition of torture. Two further human rights treaties were adopted by the UN in 1966, the International Covenant on Civil and Political Rights (ICCPR) and the International Covenant on Economic, Social and Cultural Rights (ICESCR). These two documents, along with the UDHR, are considered the International Bill of Human Rights.

    Non-domestic human rights enforcement operates at both the international and regional levels. Established in 1993, the Office of the UN High Commissioner for Human Rights supervises Charter-based and treaty-based procedures. The former are based on the UN Charter and operate under the UN Human Rights Council, where each global region is represented by elected member states. The Council is responsible for the Universal Periodic Review, which requires each UN member state to review its human rights compliance every four years, and for special procedures, including the appointment of special rapporteurs, independent experts and working groups. The treaty-based procedure allows individuals to rely on the nine primary human rights treaties to enforce their rights:

    International Convention on the Elimination of All Forms of Racial Discrimination

    ICCPR

    ICESCR

    Convention on the Elimination of All Forms of Discrimination against Women

    Convention against Torture and Other Cruel, Inhuman or Degrading Treatment or Punishment

    Convention on the Rights of the Child

    International Convention on the Protection of the Rights of All Migrant Workers and Members of Their Families

    Convention on the Rights of Persons with Disabilities

    International Convention for the Protection of All Persons from Enforced Disappearance

    The regional human rights enforcement systems operate in Europe, Africa and the Americas through the European Court of Human Rights, the Inter-American Court of Human Rights and the African Court on Human and Peoples’ Rights. International human rights has faced criticism for its Western focus, as many countries were subject to colonial rule at the time that the UDHR was drafted, although many countries in the Global South have led the development of human rights on the global stage in the intervening decades.

    International labour law is generally defined as “the substantive rules of law established at the international level and the procedural rules relating to their adoption and implementation”. It operates primarily through the International Labour Organization (ILO), a UN agency established in 1919 with the mission of protecting employment rights. The ILO has a constitution setting out a number of aims, including regulating work hours and labour supply, protecting workers and children and recognising equal pay and the right to free association, as well as the Declaration of Philadelphia of 1944, which re-defined the purpose of the ILO. The 1998 Declaration on Fundamental Principles and Rights at Work further binds ILO member states to recognise fundamental labour rights, including free association, collective bargaining and the elimination of forced labour, child labour and employment discrimination.

    The ILO has also created labour standards, which are set out in its conventions and recommendations. Member states then have the choice as to whether or not to ratify and implement these standards. The secretariat of the ILO is the International Labour Office, which states can consult to determine the meaning of a convention; these interpretations form the ILO’s case law. Although the Right to Organise Convention does not provide an explicit right to strike, this has been interpreted into the treaty through case law. The UN does not specifically focus on international labour law, although some of its treaties cover the same topics. Many of the primary human rights conventions also form part of international labour law, providing protection in employment and against discrimination on the grounds of gender and race.

    It has been claimed that there is no discrete concept of international environmental law, with the general principles of international law instead being applied to these issues. Since the 1960s, a number of instruments focused on environmental protection have been adopted, including the Declaration of the United Nations Conference on the Human Environment of 1972, the World Charter for Nature of 1982, and the Vienna Convention for the Protection of the Ozone Layer of 1985. States have generally agreed to co-operate with each other in relation to environmental law, as codified by principle 24 of the Stockholm Declaration of 1972. Despite these and other multilateral environmental agreements covering specific issues, there is no overarching policy on international environmental protection or one specific international organisation, with the exception of the UN Environment Programme. Instead, a general treaty setting out the framework for tackling an issue is typically supplemented by more specific protocols.

    Climate change has been one of the most important and heavily debated topics in recent environmental law. The United Nations Framework Convention on Climate Change, intended to set out a framework for the mitigation of greenhouse gases and responses to resulting environmental changes, was introduced in 1992 and came into force two years later. As of 2023, 198 states were parties. Separate protocols have been introduced through conferences of the parties, including the Kyoto Protocol, which was introduced in 1997 to set specific targets for greenhouse gas reduction, and the 2015 Paris Agreement, which set the goal of keeping global warming well below 2 °C (3.6 °F) above pre-industrial levels.

    Individuals and organisations have some rights under international environmental law; the Aarhus Convention of 1998 set obligations on states to provide information and allow public input on these issues. However, few disputes under the regimes set out in environmental agreements are referred to the ICJ, as the agreements tend to specify their own compliance procedures. These procedures generally focus on encouraging a state to return to compliance through recommendations, but there is still uncertainty about how they should operate. Efforts have been made to regulate these processes, although some worry that this will undercut the efficiency of the procedures themselves.

    Legal territory can be divided into four categories: territorial sovereignty, which covers land and territorial sea, including the airspace above and the subsoil below; territory outside the sovereignty of any state; res nullius, territory not yet within territorial sovereignty but legally capable of being acquired by a state; and res communis, territory that cannot be acquired by any state. There have historically been five methods of acquiring territorial sovereignty, reflecting Roman property law: occupation, accretion, cession, conquest and prescription.

    The law of the sea is the area of international law concerning the principles and rules by which states and other entities interact in maritime matters. It encompasses areas and issues such as navigational rights, sea mineral rights, and coastal waters jurisdiction. The law of the sea was primarily composed of customary law until the 20th century, beginning with the League of Nations Codification Conference in 1930, the UN Conference on the Law of the Sea and the adoption of the UNCLOS in 1982. The UNCLOS was particularly notable for making international courts and tribunals responsible for the law of the sea.

    The boundaries of a nation’s territorial sea were initially proposed to be three miles in the late 18th century. The UNCLOS instead defined it as being at most 12 nautical miles from the baseline (usually the coastal low-water mark) of a state; both military and civilian foreign ships are allowed innocent passage through these waters despite the sea being within the state’s sovereignty. A state can have jurisdiction beyond its territorial waters where it claims a contiguous zone of up to 24 nautical miles from its baseline for the purpose of preventing the infringement of its “customs, fiscal, immigration and sanitary regulations”. States are also able to claim an exclusive economic zone (EEZ) following passage of the UNCLOS, which can stretch up to 200 nautical miles from the baseline and gives the sovereign state rights over natural resources. Some states have instead chosen to retain their exclusive fishery zones, which cover the same territory. There are specific rules in relation to the continental shelf, as this can extend further than 200 nautical miles. The International Tribunal for the Law of the Sea has specified that a state has sovereign rights over the resources of the entire continental shelf, regardless of its distance from the baseline, but different rights apply to the continental shelf and the water column above it where it is further than 200 nautical miles from the coast.

    The UNCLOS defines the high seas as all parts of the sea that are not within a state’s EEZ, territorial sea or internal waters. There are six freedoms of the high seas—navigation, overflight, laying submarine cables and pipelines, constructing artificial islands, fishing and scientific research—some of which are subject to legal restrictions. Ships in the high seas are deemed to have the nationality of the flag that they have the right to fly and no other state can exercise jurisdiction over them; the exception is ships used for piracy, which are subject to universal jurisdiction.

    In 1944, the Bretton Woods Conference established the International Bank for Reconstruction and Development (later the World Bank) and the IMF. At the conference, the International Trade Organization was proposed but failed to be instituted due to the refusal of the United States to ratify its charter. Three years later, Part IV of the charter was adopted to create the General Agreement on Tariffs and Trade, which operated between 1948 and 1994, when the WTO was established. OPEC, whose members banded together to control global oil supply and prices, contributed to the abandonment of fixed currency exchange rates in favour of floating exchange rates in 1971. During the recession that followed, British Prime Minister Margaret Thatcher and US President Ronald Reagan pushed for free trade and deregulation under a neo-liberal agenda known as the Washington Consensus.

    The law relating to the initiation of armed conflict is jus ad bellum. This was codified in 1928 in the Kellogg–Briand Pact, which stated that conflicts should be settled through peaceful negotiations, with the exception, through reservations drafted by some state parties, of self-defence. These fundamental principles were re-affirmed in the UN Charter, which provided for “an almost absolute prohibition on the use of force”, with only three exceptions. The first involves force authorised by the UNSC, as that body is responsible in the first instance for responding to breaches of or threats to the peace and acts of aggression, including through the use of force or peacekeeping missions. The second exception is where a state is acting in individual or collective self-defence. A state is allowed to act in self-defence in the case of an “armed attack”, but the scope of this exception has been challenged, particularly as nuclear weapons have become more common, with many states relying instead on the customary right of self-defence as set out in the Caroline test. The ICJ considered collective self-defence in Nicaragua v. United States, where the U.S. unsuccessfully argued that it had mined harbours in Nicaragua in pre-emption of an attack by the Sandinista government against another member of the Organization of American States. The final exception is where the UNSC delegates its responsibility for collective security to a regional organisation, such as NATO.

    On humanitarian grounds, the use of landmines (Ottawa Treaty) and cluster munitions (CCM) is prohibited under international law.

    International humanitarian law (IHL) is an effort to “mitigate the human suffering caused by war” and is often complementary to the law of armed conflict and international human rights law. The concept of jus in bello (law in war) covers IHL, which is distinct from jus ad bellum. Its scope lasts from the initiation of conflict until a peaceful settlement is reached. There are two main principles in IHL: the principle of distinction, which dictates that combatants and non-combatants must be treated differently, and the principle of proportionality, which prohibits causing disproportionate suffering to combatants. In Legality of the Threat or Use of Nuclear Weapons, the ICJ described these concepts as “intransgressible principles of international customary law”.

    The two Hague Conventions of 1899 and 1907 considered restrictions on the conduct of war, and the Geneva Conventions of 1949, which were organised by the International Committee of the Red Cross, considered the protection of innocent parties in conflict zones. The First Geneva Convention covers wounded and ill combatants, the Second Geneva Convention covers combatants at sea who are wounded, ill or shipwrecked, the Third Geneva Convention covers prisoners of war and the Fourth Geneva Convention covers civilians. These conventions were supplemented by Additional Protocol I and Additional Protocol II, which were codified in 1977. Initially, IHL conventions were only considered to apply to a conflict if all parties had ratified the relevant convention under the si omnes clause, but this posed concerns and the Martens clause began to be implemented, providing that the law would generally be deemed to apply.

    There have been various agreements to outlaw particular types of weapons, such as the Chemical Weapons Convention and the Biological Weapons Convention. The use of nuclear weapons was determined to be in conflict with principles of IHL by the ICJ in 1996, although the court also held that it “cannot conclude definitively whether the threat or use of nuclear weapons would be lawful or unlawful in an extreme circumstance of self-defence.” Multiple treaties have attempted to regulate the use of these weapons, including the Non-Proliferation Treaty and the Joint Comprehensive Plan of Action, but key states have failed to sign or have withdrawn. There have been similar debates on the use of drones and cyberwarfare on the international stage.

    International criminal law sets out the definition of international crimes and compels states to prosecute them. While war crimes have been prosecuted throughout history, this was traditionally done by national courts. The International Military Tribunal in Nuremberg and the International Military Tribunal for the Far East in Tokyo were established at the end of World War II to prosecute key actors in Germany and Japan. The jurisdiction of the tribunals was limited to crimes against peace (based on the Kellogg–Briand Pact), war crimes (based on the Hague Conventions) and crimes against humanity, establishing new categories of international crime. Throughout the twentieth century, the separate crimes of genocide, torture and terrorism were also recognised.

    Initially these crimes were intended to be prosecuted by national courts and subject to their domestic procedures. The Geneva Conventions of 1949, the Additional Protocols of 1977 and the 1984 UN Convention against Torture mandated that the national courts of the contracting countries must prosecute these offenses where the perpetrator is on their territory or extradite them to any other interested state. It was in the 1990s that two ad hoc tribunals, the International Criminal Tribunal for the Former Yugoslavia (ICTY) and the International Criminal Tribunal for Rwanda (ICTR), were established by the UNSC to address specific atrocities. The ICTY had authority to prosecute war crimes, crimes against humanity and genocide occurring in Yugoslavia after 1991 and the ICTR had authority to prosecute genocide, crimes against humanity and grave breaches of the 1949 Geneva Conventions during the 1994 Rwandan genocide.

    The International Criminal Court (ICC), established by the 1998 Rome Statute, is the first and only permanent international court to prosecute genocide, war crimes, crimes against humanity, and the crime of aggression. There are 123 state parties to the ICC although a number of states have declared their opposition to the court; it has been criticised by African countries including The Gambia and Kenya for “imperialist” prosecutions. One particular aspect of the court that has received scrutiny is the principle of complementarity, whereby the ICC only has jurisdiction if the national courts of a state with jurisdiction are “unwilling or unable to prosecute” or where a state has investigated but chosen not to prosecute a case. The United States has a particularly complicated relationship with the ICC; originally signing the treaty in 2000, the US stated in 2002 that it did not intend to become a party as it believed the ICC threatened its national sovereignty and the country does not recognise the court’s jurisdiction.

    Hybrid courts are the most recent type of international criminal court; they aim to combine both national and international components, operating in the jurisdiction where the crimes in question occurred. International courts have been criticised for a lack of legitimacy, as they can seem disconnected from the crimes that have occurred, but the hybrid courts are able to provide the resources that may be lacking in countries facing the aftermath of serious conflict. There has been debate about what courts can be included within this definition, but generally the Special Panels for Serious Crimes in East Timor, the Kosovo Specialist Chambers, the Special Court for Sierra Leone, the Special Tribunal for Lebanon and the Extraordinary Chambers in the Courts of Cambodia have been listed.

    International legal theory comprises a variety of theoretical and methodological approaches used to explain and analyse the content, formation and effectiveness of international law and institutions and to suggest improvements. Some approaches center on the question of compliance: why states follow international norms in the absence of a coercive power that ensures compliance. Some scholars view compliance failure as a problem of enforcement, whereby states can be incentivized to follow international law through international inducements, reciprocity, concerns about reputation, or domestic political factors. Other scholars see compliance failure as rooted in a lack of state capacity, where a willing state is incapable of fully following its international legal commitments. Rational choice theorists have referred to the “Three Rs” that lead states to comply with international law: Reciprocity, Reputation, and Retaliation. Constructivist scholars emphasize how states are socialized into complying with international law by internalizing norms and seeking status and reputation.

    Other perspectives are policy oriented: they elaborate theoretical frameworks and instruments to criticize existing norms and to make suggestions on how to improve them. Some of these approaches are based on domestic legal theory, some are interdisciplinary, and others have been developed expressly to analyse international law. Classical approaches to international legal theory are the natural law, eclectic, and legal positivist schools of thought.

    The natural law approach argues that international norms should be based on axiomatic truths. The 16th-century natural law writer de Vitoria examined the questions of just war, Spanish authority in the Americas, and the rights of the Native American peoples. In 1625, Grotius argued that nations as well as persons ought to be governed by universal principles based on morality and divine justice, while the relations among polities ought to be governed by the law of peoples, the jus gentium, established by the consent of the community of nations on the basis of the principle of pacta sunt servanda, that is, on the basis of the observance of commitments. For his part, de Vattel argued instead for the equality of states as articulated by 18th-century natural law and suggested that the law of nations was composed of custom and law on the one hand, and natural law on the other. During the 17th century, the basic tenets of the Grotian or eclectic school, especially the doctrines of legal equality, territorial sovereignty, and independence of states, became the fundamental principles of the European political and legal system and were enshrined in the 1648 Peace of Westphalia.

    The early positivist school emphasized the importance of custom and treaties as sources of international law. In the 16th century, Gentili used historical examples to posit that positive law (jus voluntarium) was determined by general consent. van Bynkershoek asserted that the bases of international law were customs and treaties commonly consented to by various states, while John Jacob Moser emphasized the importance of state practice in international law. The positivist school narrowed the range of international practice that might qualify as law, favouring rationality over morality and ethics. The 1815 Congress of Vienna marked the formal recognition of the political and international legal system based on the conditions of Europe. Modern legal positivists consider international law as a unified system of rules that emanates from the states’ will. International law, as it is, is an “objective” reality that needs to be distinguished from law “as it should be”. Classic positivism demands rigorous tests for legal validity and deems all extralegal arguments irrelevant.

    John Austin asserted that, due to the principle of par in parem non habet imperium, “so-called” international law, lacking a sovereign power and thus unenforceable, was not really law at all, but “positive morality”, consisting of “opinions and sentiments…more ethical than legal in nature.” Since states are few in number, diverse and atypical in character, unindictable, and lack a centralised sovereign power, and since their agreements are unpoliced and decentralised, Martin Wight argued that international society is better described as anarchy.

    Hans Morgenthau believed international law to be the weakest and most primitive system of law enforcement; he likened its decentralised nature to the law that prevails in preliterate tribal societies. Monopoly on violence is what makes domestic law enforceable; but between nations, there are multiple competing sources of force. The confusion created by treaty laws, which resemble private contracts between persons, is mitigated only by the relatively small number of states. He asserted that no state may be compelled to submit a dispute to an international tribunal, making laws unenforceable and voluntary. International law is also unpoliced, lacking agencies for enforcement. He cites a 1947 US opinion poll in which 75% of respondents wanted “an international police to maintain world peace”, but only 13% wanted that force to exceed the US armed forces. Later surveys have produced similar contradictory results.

    International law is currently navigating a complex array of challenges and controversies that underscore the dynamic nature of international relations in the 21st century. Some of these challenges include enforcement difficulties, the impact of technological advancements, climate change, and worldwide pandemics. The possible re-emergence of the right of conquest as international law is contentious.

    Among the most pressing issues are enforcement difficulties, where the lack of a centralized global authority often leads to non-compliance with international norms, particularly evident in violations of International Humanitarian Law (IHL). Sovereignty disputes further complicate the international legal landscape, as conflicts over territorial claims and jurisdictional boundaries arise, challenging the principles of non-interference and peaceful resolution. Furthermore, the emergence of new global powers introduces additional layers of complexity, as these nations assert their interests and challenge established norms, necessitating a reevaluation of the global legal order to accommodate shifting power dynamics.

    Cybersecurity has also emerged as a critical concern, with international law striving to address the threats posed by cyber-attacks to national security, infrastructure, and individual privacy. Climate change demands unprecedented international cooperation, as evidenced by agreements like the Paris Agreement, though disparities in responsibilities among nations pose significant challenges to collective action.

    The COVID-19 pandemic has further highlighted the interconnectedness of the global community, emphasizing the need for coordinated efforts to manage health crises, vaccine distribution, and economic recovery.

    These contemporary issues underscore the need for ongoing adaptation and cooperation within the framework of international law to address the multifaceted challenges of the modern world, ensuring a just, peaceful, and sustainable global order.

  • Water drum

    Water drums are a category of membranophone characterized by the filling of the drum chamber with some amount of water to create a unique resonant sound. Water drums are used all over the world, but are found most prominently in a ceremonial as well as social role in the Indigenous music of North America, as well as in African music. The drums are most often made from a pot of clay, ceramic, wood or metal, with a small amount of liquid inside, and topped with a drum head consisting of a stretched membrane, usually of some type of animal hide.

    Water drumming, known as tambor de agua (Spanish: “drum of water”), bungo, or liquindi, is of African origin and is played on a body of water itself, such as a river, by striking the surface directly with one’s hands. It is performed by the Baka in Africa and, in South America, by descendants of formerly enslaved people, with strokes comparable to those of the culoepuya.

    Construction

    Historically, water drums have most often been made with a body of wood or clay, with a skin drum head. Wooden water drums are made either by hollowing out a solid section of a small softwood log, or by assembling cedar slats banded like a wooden keg. Clay drums are either handmade for this purpose, or an old crock is used. Wyandot, Seneca, and Cayuga people traditionally use groundhog skin for the drum head, though deer skin is also sometimes used. An Iroquoian or Wendat/Wyandot drum stick is carved from a piece of hardwood with a small rounded tip. The tone of the drum changes based on the amount of water in the vessel, as well as how tight or loose the head is.

    Modern Native American Church ceremonies often use a water drum made from an iron, brass or copper kettle. These styles of water drum are now more common than the traditional woodland forms. The distinctive sound of the drum characteristic of the Native American Church is created because: “The water inside is in constant motion and produces a special resonance. The player’s thumb, pressed against the drum head, holds the tone at a constant pitch which then drops a fifth or more when the pressure is relaxed between songs.”

    Use

    Native American

    Water drums are common in Native American music, and are used ceremonially among Indigenous peoples of both North and South America.

    North America

    In North America, Iroquois, Navajo, Cherokee, Muscogee, and Apache peoples use water drums in music, and they are used both ceremonially and in traditional Longhouse social dances among the Huron/Wendat/Wyandot and Iroquois/Haudenosaunee peoples. The Ojibwa, Odawa and Potawatomi traditionally call the drums midegwakikoon, with “Mide” referring to the Midewiwin medicine societies. Water drums are used in Yaqui deer dance music, representing the deer’s heartbeat.

    South America

    In South America, the cataquí is a water drum used by the Toba, Wichí, Pilagá, Chorote and Nivaclé cultures in the South American Gran Chaco region. The cataquí is made from a hollowed-out tree trunk or ceramic pot, into which water is poured. The mouth is closed with a leather skin, made from corzuela hide (red brocket deer skin), which is hit with a single stick. The cataquí has been used in ceremonies, including the carob, and has also been used in calling songs at dances, for couples to form.

    Africa

    In Central Africa, water drums are a major component of Baka music. In some areas of the Congo and Cameroon their use is reserved for women, specifically women hunters, who play them in the ceremonies they hold before going on hunts.

    In Tuareg music, the askalabo is a calabash “partly submerged in water, drummed to mimic camels’ hooves”.

    Pop culture

    Since approximately 2006, the American heavy metal band Mushroomhead has used nontraditional water drums in its live show, mainly for visual purposes.

  • Lummi stick

    Lummi sticks, named after the Lummi Native American peoples, are hardwood cylindrical sticks, usually roughly 7 inches long and 0.75 inches in diameter, used as percussive musical instruments. They are generally struck against one another, and used frequently in musical education to teach rhythm.

    Another variety, called simply a rhythm stick, is 12 inches long and painted blue. These are generally either cylindrical or fluted, and come in sets containing an equal number of both.

    The sticks are used in elementary school education in the US and Canada.

  • Footed drum

    A footed drum is a class of membranophone, of Native American and Polynesian origin, characterized by an open area at the bottom of the instrument, which is raised on feet. This open area adds resonance to the drum’s sound. It is made out of hollow wood and/or bone.

    Archaeologists have unearthed ‘foot drums’ in several southwestern and central-Californian Native American archaeological sites inhabited, or formerly inhabited, by the Miwok, Maidu, Aztec, and Hopi Indian tribes. These drums were often hollow logs of semicircular cross-section laid over wood-covered ‘resonating’ pits positioned according to custom in kivas or dance houses. The foot drums were played by stomping on top of the hollow log, with the structure’s poles used for steadying.