Sociology is the study of human social life. Human social life is complex and encompasses many facets of the human experience. Because of this complexity, the discipline of sociology has been subdivided over time into specialty areas.
The Foundations of Sociology
Sociology is the study of human social life. Sociology has many sub-sections of study, ranging from the analysis of conversations to the development of theories to try to understand how the entire world works. This chapter will introduce you to sociology and explain why it is important, how it can change your perspective of the world around you, and give a brief history of the discipline.
General Sociological Theory
Deviance and Norms
Race and Ethnicity
Health and Medicine
Being a Sociologist
What is Sociology?
Sociology and Other Social Sciences
The social sciences comprise the application of scientific methods to the study of the human aspects of the world. Psychology studies the human mind and micro-level (or individual) behavior; sociology examines human society; political science studies the governing of groups and countries; communication studies the flow of discourse via various media; economics concerns itself with the production and allocation of wealth in society; and social work is the application of social scientific knowledge in society. Social sciences diverge from the humanities in that many in the social sciences emphasize the scientific method or other rigorous standards of evidence in the study of humanity.
The goal of this chapter is to introduce the methods employed by sociologists in their study of social life. This is not a chapter on statistics nor does it detail specific methods in sociological investigation. The primary aim is to illustrate how sociologists go beyond common sense understandings in trying to explain or understand social phenomena.
Sociology and Other Social Sciences
The Scientific Method
Correlation and Causation
Quantitative and Qualitative
Objective vs. Critical
What Can Sociology Tell Us?
The simplest definition of society is a group of people who share a defined territory and a culture. In sociology, we take that definition a little further by arguing that society is also the social structure and interactions of that group of people. Social structure is the relatively enduring patterns of behavior and relationships within a society. Thus, a society is not only the group of people and their culture, but the relationships between the people and the institutions within that group.
In sociology, a distinction is made between society and culture. Culture refers to the norms, values, beliefs, behaviors, and meanings given to symbols in a society. Culture is distinct from society in that it adds meanings to relationships. For instance, what it means to be a "husband" to a gay couple in Boston is very different from what it means to be a husband to a polygamist man in rural southern Utah. Thus, while the relationship exists in both (i.e., they both have social structure), what the relationship means differs by culture.
All human societies have a culture and culture can only exist where there is a society. Sociologists distinguish between society and culture despite their close interconnectedness primarily for analytical purposes: It allows sociologists to think about societal development independent of culture and cultural change (which are discussed in the next chapter in greater detail) even though societal change and development are contingent upon culture.
This chapter presents a brief overview of some of the types of human societies that have existed and continue to exist. It will then present some classic approaches to understanding society and what changing social structure can mean for individuals.
Classical Views on Social Change
Subcultures and Countercultures
Ethnocentrism and Cultural Relativism
Theories of Culture
Cultural Sociology: Researching Culture
Elements of Socialization
Theoretical Understandings of Socialization
In sociology, a group is usually defined as a number of people who identify and interact with one another. This is a very broad definition, as it includes groups of all sizes, from dyads to whole societies. While an aggregate comprises merely a number of individuals, a group in sociology exhibits cohesiveness to a larger degree. Aspects that members in the group may share include: interests, values, ethnic/linguistic background, roles and kinship. One way of determining if a collection of people can be considered a group is if individuals who belong to that collection use the self-referent pronoun "we;" using "we" to refer to a collection of people often implies that the collection thinks of itself as a group. Examples of groups include: families, companies, circles of friends, clubs, local chapters of fraternities and sororities, and local religious congregations.
Collections of people that do not use the self-referent pronoun "we" but share certain characteristics (e.g., roles, social functions, etc.) are different from groups in that they usually do not regularly interact with each other nor share similar interests or values. Such collections are referred to as categories of people rather than groups; examples include: police, soldiers, millionaires, women, etc.
Individuals form groups for a variety of reasons. There are some rather obvious ones, like reproduction, protection, trade, and food production. But social categorization of people into groups and categories also facilitates behavior and action. An example may help explain this idea: imagine that you are driving down a highway when a car with flashing lights on its roof, driven by a uniformed individual, pulls up behind you and signals you to stop.
Because groups and categories help facilitate social behavior, you know who this individual is: a member of a law enforcement category like the police or highway patrol. In all likelihood, you do not have to question this individual as to why they are driving a special car with lights on it, why they are wearing a uniform, why they are carrying a gun, or why they pulled you over (you may ask why they pulled you over, but doing so often increases the likelihood they'll give you a ticket). In short, because you recognize that the individual driving the car belongs to a specific social category (or group), you can enter this interaction with a body of knowledge that will help guide your behavior. You do not have to learn how to interact in that situation every single time you encounter it. Social categorization of people into groups and categories is a heuristic device that makes social interaction easier.
Social Identity Theory
Primary and Secondary Groups
Demography is the study of human population dynamics. It encompasses the study of the size, structure and distribution of populations, and how populations change over time due to births, deaths, migration, and aging. Demographic analysis can relate to whole societies or to smaller groups defined by criteria such as education, religion, or ethnicity.
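The components of population change mentioned above (births, deaths, and migration) combine in what demographers call the balancing equation. The following is a minimal sketch in Python; all of the numbers are invented for illustration, not real demographic data:

```python
def project_population(pop, births, deaths, net_migration):
    """Demographic balancing equation:
    P(t+1) = P(t) + births - deaths + net migration."""
    return pop + births - deaths + net_migration

# Hypothetical population with invented annual components of change.
pop = 1_000_000
for year in range(5):
    pop = project_population(pop, births=14_000, deaths=9_000,
                             net_migration=2_000)

print(pop)  # → 1035000
```

Real demographic projections disaggregate the population by age and sex (the cohort-component method), but the underlying accounting identity is the same.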
Why study demography?
Data and Methods
The Demographic Transition
Population Growth and Overpopulation
Aging (often spelt ageing) is the process of becoming older. Aging is a biological process, but it is of interest to sociologists because it is accompanied by changes in how one is viewed and treated in society.
Dividing the lifespan
Socially Constructed Aspects of Aging
Differential Treatment by Age or Ageism
Age and Income
Age and Gender
Age and Race
Aging in the US
Global Aging Trends
Aging and Health
Deviance is any behavior that violates cultural norms. Deviance is often divided into two types of deviant activity. The first, crime, is the violation of formally enacted laws and is referred to as formal deviance. Examples of formal deviance include robbery, theft, rape, murder, and assault, to name a few. The second type of deviant behavior involves violations of informal social norms (norms that have not been codified into law) and is referred to as informal deviance. Examples of informal deviance might include picking one's nose, belching loudly (in some cultures), or standing unnecessarily close to another person (again, in some cultures).
As the last two examples in the preceding paragraph illustrate, deviance can vary quite dramatically from culture to culture. Cultural norms are relative; this makes deviant behavior relative as well. For instance, in general U.S. society it is uncommon for people to restrict their speech to certain hours of the day. In the Christ Desert Monastery, there are specific rules about when the residents can and cannot speak, including a specific ban on speaking between 7:30 pm and 4:00 am. The norms and rules of the Christ Desert Monastery are examples of how norms are relative to cultures.
Current research on deviance by sociologists takes many forms. For example, Dr. Karen Halnon of Pennsylvania State University studies how some people exercise informal deviance. Her research focuses on what she calls "deviance vacations," where people of certain socioeconomic status descend to lower strata. For instance, heterosexual white males may become drag queens on the weekend. It is a vacation because heterosexual white males can afford to descend temporarily and then return to the advantages of their true socioeconomic status. Other examples include white hip-hop acts like Eminem and Nu-Metal bands like Limp Bizkit that mimic lower or middle class people in order to use their socioeconomic credentials for profit, despite their true socioeconomic status.
Sociological interest in deviance includes both interests in measuring formal deviance (statistics of criminal behavior; see below) and a number of theories that try to explain both the role of deviance in society and its origins. This chapter will cover the theories of deviance used by sociologists and will also cover current crime statistics.
Theories of Deviance
Race and Ethnicity
The Changing Definitions of Race
Social Construct or Biological Lineage
Prejudice, Bias, and Discrimination
A Research Example
Gender vs. Sex
Social and Psychological Differences
Theories of Gender Differences
Social class in America
Intersections of Class, Race, and Gender
Capitalism, class, privilege, and oppression
Capitalism, race and gender inequality
Points of View: Micro-Meso-Macro
Consequences of Inequality
Economy refers to the ways people use their environment to meet their material needs. It is the realized economic system of a country or other area. It includes the production, exchange, distribution, and consumption of goods and services of that area. A given economy is the end result of a process that involves its technological evolution, history, and social organization, as well as its geography, natural resource endowment, and ecology, among other factors. These factors give an economy its context and content, and set the conditions and parameters in which it functions.
Most people have a network of others they consider their family. But what exactly is a family? How do family structures vary from culture to culture or subculture to subculture? Is what we consider the traditional family changing over time? Why do families exist? These are the questions sociologists ask when discussing families.
What is a family?
Families and Theory
The Family Life Cycle
Families and Inequality
New Developments in Families
Sociologists study religion the same way they study other social institutions, like education or government. The aim is primarily to understand religions, but included in trying to understand religions is the aim of trying to predict what religions will eventually do (or what will become of religions). To do this, sociologists employ demographic techniques, survey analysis, ethnography, and various other methodological approaches. It is important to note at the beginning of this chapter that sociologists study religion not to prove, disprove or normatively evaluate religion. Sociologists aren't interested in whether a religion is right or wrong. This requires sociologists to assume a relativistic perspective that basically takes a neutral stance toward issues of right or wrong or true or false. That said, the social scientific study of religion can be challenging from a faith standpoint as it provides alternative, naturalistic explanations for many elements of religion (e.g., the sources of conversion experiences).
Definitions of Religion
The Church-Sect Typology
Theories of Religion
World Religions and Religious History
Religion and Other Social Factors
The Future of Religion
Paying for Medical Care
Behavior and Environmental Influences on Health
The Evolution of Health Care and Medicine
Politics is the process by which groups of people make decisions. The term is generally applied to behavior within civil governments, but politics has been observed in all human group interactions, including corporate, academic, and religious institutions. It consists of social relations involving authority or power, the regulation of political units, and the methods and tactics used to formulate and apply social policy.
Power, Authority, and Violence
Types of Governments
Voting Patterns and Inequality
Why Study Collective Behavior?
Types of Social Movements
Stages in Social Movements
Social Movement Theories
New Social Movement theories
Examples of Social Movements
Now that you know what sociologists study, you might be wondering: What can you do with a degree in sociology? In an ideal world, just studying social problems would make them go away. But, alas, as you've learned from reading this book, we don't live in an ideal world, and there is far more you can do with a degree in sociology than just study social life. The American Sociological Association, among others, has looked into this question and has some pretty intriguing answers. What follows is a brief explanation of what you can do with a degree in sociology.
Bachelor's Degree Occupations
PhD Degree Occupations
Personal Experiences of Individuals With Degrees in Sociology
The following are videos, including fiction films, non-fiction documentaries, and recorded lectures, examining topics in the field of sociology.
A new website with video clips in Sociology, The Sociological Cinema, has some video clips available as well: http://www.thesociologicalcinema.com/index.html
Sociology is a branch of the social sciences that uses systematic methods of empirical investigation and critical analysis to develop and refine a body of knowledge about human social structure and activity, sometimes with the goal of applying such knowledge to the pursuit of government policies designed to benefit the general social welfare. Its subject matter ranges from the micro level to the macro level. Microsociology involves the study of people in face-to-face interactions. Macrosociology involves the study of widespread social processes.
Sociology is a broad discipline in terms of both methodology and subject matter. Its traditional focuses have included social relations, social stratification, social interaction, culture and deviance, and its approaches have included both qualitative and quantitative research techniques. As much of what humans do fits under the category of social structure or social activity, sociology has gradually expanded its focus to such far-flung subjects as the study of economic activity, health disparities, and even the role of social activity in the creation of scientific knowledge. The range of social scientific methods has also been broadly expanded. The "cultural turn" of the 1970s and 1980s brought more humanistic interpretive approaches to the study of culture in sociology. Conversely, the same decades saw the rise of new mathematically rigorous approaches, such as social network analysis.
The social world is changing. Some argue it is growing; others say it is shrinking. The important point to grasp is: society does not remain unchanged over time. As will be discussed in more detail below, sociology has its roots in significant societal changes (e.g., the industrial revolution, the creation of empires, and the Enlightenment's turn toward scientific reasoning). Early practitioners developed the discipline as an attempt to understand societal changes.
Some early sociological theorists (e.g., Marx, Weber, and Durkheim) were disturbed by the social processes they believed to be driving the change, such as the quest for solidarity, the attainment of social goals, and the rise and fall of classes, to name a few examples. The founders of sociology were some of the earliest individuals to employ what C. Wright Mills (a prominent mid-20th-century American sociologist) later labeled the sociological imagination: the ability to situate personal troubles within an informed framework of social issues.
As Mills saw it, the sociological imagination could help individuals cope with the social world by helping them to step outside of their personal, self-centric view of the world. In employing the sociological imagination, people are able to see the events and social structures that influence behavior, attitudes, and culture.
The sociological imagination goes beyond armchair sociology or common sense. Many people believe they understand the world and the events taking place within it, even though they have not actually engaged in a systematic attempt to understand the social world, as sociologists do. Humans like to attribute causes to events and attempt to understand what is taking place around them. This is why individuals have used religious ceremonies for centuries to invoke the will of the gods: they believed the gods controlled certain elements of the natural world (e.g., the weather). Just as sacrificing two goats to ensure the safe operation of a Boeing 757 (and propitiate Akash Bhairab, the Hindu sky god) is an attempt to influence the natural world without first trying to understand how it works, armchair sociology is an attempt to understand how the social world works without employing scientific methods.
It would be inaccurate to say sociologists never sit around (even sometimes in comfy armchairs) trying to figure out how the world works. But induction is just a first step in understanding the social world. In order to test their theories, sociologists get up from their armchairs and enter the social world. They gather data and evaluate their theories in light of the data they collect (a.k.a. deduction). Sociologists do not just propose theories about how the social world works; they test those theories using the scientific method.
Sociologists, like all humans, have values, beliefs, and even pre-conceived notions of what they might find in doing their research. But, as Peter Berger, a well-known sociologist, has argued, what distinguishes the sociologist from non-scientific researchers is that "[the] sociologist tries to see what is there. He may have hopes or fears concerning what he may find. But he will try to see, regardless of his hopes or fears. It is thus an act of pure perception..."
Sociology, then, is an attempt to understand the social world by situating social events in their corresponding environment (i.e., social structure, culture, history) and trying to understand social phenomena by collecting and analyzing empirical data.
In ancient philosophy, there was no difference between the liberal arts of mathematics and the study of history, poetry, or politics. Only with the development of mathematical proof did there gradually arise a perceived difference between scientific disciplines and the humanities or liberal arts. Thus, Aristotle studied planetary motion and poetry with the same methods, and Plato mixed geometrical proofs with his demonstrations of intrinsic knowledge.
This unified, descriptive conception of science persisted, for example, into the time of Thomas Hobbes, who argued that deductive reasoning from axioms created a scientific framework. His book Leviathan was a scientific description of a political commonwealth. Within decades of Hobbes' work, a revolution took place in what constituted science, particularly with the work of Isaac Newton in physics. Newton, by revolutionizing what was then called natural philosophy, changed the basic framework by which individuals understood what was scientific.
While Newton was merely the archetype of an accelerating trend, the important distinction is that for Newton the mathematical flowed from a presumed reality independent of the observer, and it worked by its own rules. For philosophers of the same period, mathematical expressions of philosophical ideals were taken to be symbolic of natural human relationships as well: the same laws moved physical and spiritual reality. For examples, see Blaise Pascal, Gottfried Leibniz, and Johannes Kepler, each of whom took mathematical examples as models for human behavior directly: in Pascal's case, the famous wager; for Leibniz, the invention of binary computation; and for Kepler, the intervention of angels to guide the planets.
In the realm of other disciplines, this created a pressure to express ideas in the form of mathematical relationships. Such relationships, called Laws after the usage of the time (see philosophy of science), became the model that other disciplines would emulate. In the late 19th century, attempts to apply equations to statements about human behavior became increasingly common. Among the first were the Laws of philology, which attempted to map the change over time of sounds in a language. In the early 20th century, a wave of change came to science. Statistics and probability theory were sufficiently developed to be considered "scientific," resulting in the widespread use of statistics in the social sciences (they are also widely used in most other sciences, including biology).
The first thinkers to attempt to combine scientific inquiry with the exploration of human relationships were Emile Durkheim in France and William James in the United States. Durkheim's sociological theories and James' work on experimental psychology had an enormous impact on those who followed.
One of the most persuasive advocates for the scientific treatment of philosophy was John Dewey (1859-1952). He began, as Marx did, in an attempt to weld Hegelian idealism and logic to experimental science, for example in his Psychology of 1887. However, it was when he abandoned Hegelian constructs and joined the movement in America called Pragmatism that he began to formulate his basic doctrine on the three phases of the process of inquiry.
With the rise of the idea of quantitative measurement in the physical sciences (see, for example Lord Rutherford's famous maxim that any knowledge that one cannot measure numerically "is a poor sort of knowledge"), the stage was set for the division of the study of humanity into the humanities and the social sciences.
Although sociology emerged from Comte's vision of a discipline that would subsume all other areas of scientific inquiry, that was not to be the future of sociology. Far from replacing the other sciences, sociology has taken its place as a particular perspective for investigating human social life.
In the past, sociological research focused on the organization of complex, industrial societies and their influence on individuals. Today, sociologists study a broad range of topics. For instance, some sociologists research macro-structures that organize society, such as race or ethnicity, social class, gender roles, and institutions such as the family. Other sociologists study social processes that represent the breakdown of macro-structures, including deviance, crime, and divorce. Additionally, some sociologists study micro-processes such as interpersonal interactions and the socialization of individuals. It should also be noted that recent sociologists, taking cues from anthropologists, have realized the Western emphasis of the discipline. In response, many sociology departments around the world are now encouraging multi-cultural research.
The next two chapters in this book will introduce the reader to more extensive discussions of the methods and theory employed in sociology. The remaining chapters are examinations of current areas of research in the discipline.
Sociology is a relatively new academic discipline. It emerged in the early 19th century in response to the challenges of modernity. Increasing mobility and technological advances resulted in the increasing exposure of people to cultures and societies different from their own. The impact of this exposure was varied, but for some people included the breakdown of traditional norms and customs and warranted a revised understanding of how the world works. Sociologists responded to these changes by trying to understand what holds social groups together and also exploring possible solutions to the breakdown of social solidarity.
Auguste Comte and Other Founders
Auguste Comte coined the term sociology in 1838. Comte believed society developed in stages: first the theological stage, then the metaphysical stage, and finally the scientific stage (a view no longer held in sociology). Other founders of sociology include Karl Marx, Ferdinand Toennies, Emile Durkheim, Vilfredo Pareto, and Max Weber.
The Development of the Discipline
The first book with the term sociology in its title was written in the mid-19th century. In the United States, the first sociology course was taught in 1890. The first full-fledged university department of sociology in the United States was established in 1892, and the first European one was founded in 1895. In 1905 the American Sociological Association, the world's largest association of professional sociologists, was founded.
Early Sociological Studies
Quantitative vs. Qualitative Sociology
The term sociology was recoined by Auguste Comte (1798-1857) in 1838, from the Latin term socius (companion, associate) and the Greek term logia (study of, speech); the word had been used earlier by the French essayist Emmanuel-Joseph Sieyès. Comte hoped to unify all the sciences under sociology; he believed sociology held the potential to improve society and direct human activity, including the other sciences.
While it is no longer a theory employed in sociology, Comte argued for an understanding of society he labeled The Law of Three Stages. Comte, not unlike other Enlightenment thinkers, believed society developed in stages. The first was the theological stage, where people took a religious view of society. The second was the metaphysical stage, where people understood society as natural (not supernatural). Comte's final stage was the scientific or positivist stage, which he believed to be the pinnacle of social development. In the scientific stage, society would be governed by reliable knowledge and would be understood in light of the knowledge produced by science, primarily sociology. While vague connections between Comte's Law and human history can be seen, it is generally understood in sociology today that Comte's approach is a highly simplified and ill-founded way of understanding social development (see instead demographic transition theory and Ecological-Evolutionary Theory).
Other classical theorists of sociology from the late 19th and early 20th centuries include Karl Marx, Ferdinand Toennies, Emile Durkheim, Vilfredo Pareto, and Max Weber. As pioneers in sociology, most of the early sociological thinkers were trained in other academic disciplines, including history, philosophy, and economics. The diversity of their training is reflected in the topics they researched, including religion, education, economics, psychology, ethics, philosophy, and theology. Perhaps with the exception of Marx, their most enduring influence has been on sociology, and it is in this field that their theories are still considered most applicable.
The first book with the term sociology in its title was written in the mid-19th century by the English philosopher Herbert Spencer. In the United States, the first sociology course was taught at the University of Kansas, Lawrence in 1890 under the title Elements of Sociology (the oldest continuing sociology course in America). The first full-fledged university department of sociology in the United States was established in 1892 at the University of Chicago by Albion W. Small, who in 1895 founded the American Journal of Sociology. The first European department of sociology was founded in 1895 at the University of Bordeaux by Emile Durkheim, founder of L'Année Sociologique (1896). In 1919 a sociology department was established in Germany at the Ludwig Maximilians University of Munich by Max Weber, and in 1920 one was established in Poland by Florian Znaniecki. The first sociology departments in the United Kingdom were founded after the Second World War.
International cooperation in sociology began in 1893, when Rene Worms founded the small Institut International de Sociologie, which was eventually eclipsed by the much larger International Sociological Association, founded in 1949. In 1905 the American Sociological Association, the world's largest association of professional sociologists, was founded.
Early sociological studies considered the field to be similar to the natural sciences, like physics or biology. As a result, many researchers argued that the methodology used in the natural sciences was perfectly suited for use in the social sciences. The effect of employing the scientific method and stressing empiricism was the distinction of sociology from theology, philosophy, and metaphysics. This also resulted in sociology being recognized as an empirical science. This early sociological approach, supported by Auguste Comte, led to positivism, a methodological approach based on sociological naturalism. The goal of positivism, as in the natural sciences, is prediction. But in the case of sociology, it is the prediction of human behavior, which is a complicated proposition.
The goal of predicting human behavior was quickly realized to be a bit lofty. Scientists like Wilhelm Dilthey and Heinrich Rickert argued that the natural world differs from the social world, as human society has culture, unlike the societies of most other animals (e.g., the behavior of ants, wolves, etc. is primarily based on genetic instructions and is not passed from generation to generation through socialization). As a result, an additional goal was proposed for sociology. Max Weber and Wilhelm Dilthey introduced the concept of verstehen. The goal of verstehen is less to predict behavior than it is to understand behavior. Outside observers of a culture relate to an indigenous people on both the observer's and the observed's own terms in order to comprehend the cultural conditions. While arriving at a verstehen-like understanding of a culture employs systematic methodologies like the positivistic approach of predicting human behavior, it is often a more subjective process.
The inability of sociology and other social sciences to perfectly predict the behavior of humans or to fully comprehend a different culture has led to the social sciences being labeled "soft sciences." While some might consider this label derogatory, in a sense it can be seen as an admission of the remarkable complexity of humans as social animals. Any animal as complex as humans is bound to be difficult to fully comprehend. What's more, humans, human society, and human culture are all constantly changing, which means the social sciences will constantly be works in progress.
The contrast between positivist sociology and the verstehen approach has been reformulated in modern sociology as a distinction between quantitative and qualitative methodological approaches, respectively. Quantitative sociology is generally a numerical approach to understanding human behavior. Surveys with large numbers of participants are aggregated into data sets and analyzed using statistics, allowing researchers to discern patterns in human behavior. Qualitative sociology generally opts for depth over breadth. The qualitative approach uses in-depth interviews, focus groups, or analysis of content sources (books, magazines, journals, TV shows, etc.) as the data source. These sources are then analyzed systematically to discern patterns and to arrive at a better understanding of human behavior.
Drawing a hard and fast distinction between quantitative and qualitative sociology is a bit misleading. The first step in all sciences is the development of a theory and the generation of testable hypotheses. While there are some individuals who begin analyzing data without a theoretical orientation to guide their analysis, most begin with a theoretical idea or question and gather data to test that theory. The second step is the collection of data. This is really where the two approaches differ. Quantitative sociology focuses on numerical representations of the research subjects (e.g., Do conservative Christian fathers spend more time in child care than secular fathers, when measured in hours?). Qualitative sociology focuses on the ideas found within the discourse and rhetoric of the research subjects (e.g., What is the narrative homosexual men use to explain their continued participation in religions that condemn their sexual orientation?). The goal of both approaches is to answer a question and/or test a theory.
Evaluation and Iteration
In the scientific pursuit of prediction and explanation, two relationships between variables are often confused: correlation and causation. Correlation refers to a relationship between two (or more) variables in which they change together. A correlation can be positive/direct or negative/inverse. A positive correlation means that as one variable increases (e.g., ice cream consumption) the other variable also increases (e.g., crime). A negative correlation is just the opposite; as one variable increases (e.g., socioeconomic status), the other variable decreases (e.g., infant mortality rates).
Causation refers to a relationship between two (or more) variables where one variable causes the other. In order for a variable to cause another, it must meet the following three criteria: the two variables must be correlated (they change together); the cause must precede the effect in time; and the relationship must not be spurious (it must not disappear when a third variable is taken into account).
An example may help explain the difference. Ice cream consumption is positively correlated with incidents of crime.
Employing the scientific method outlined above, the reader should immediately question this relationship and attempt to discover an explanation. It is at this point that a simple yet noteworthy phrase should be introduced: correlation is not causation. If you look back at the three criteria of causation above, you will notice that the relationship between ice cream consumption and crime meets only one of the three criteria (they change together). The real explanation of this relationship is the introduction of a third variable: temperature. Ice cream consumption and crime increase during the summer months. Thus, while these two variables are correlated, ice cream consumption does not cause crime or vice versa. Both variables increase due to the increasing temperatures during the summer months.
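The ice cream example above can be sketched numerically. The snippet below uses invented, simulated data (the numbers are not real crime statistics): a lurking variable, temperature, drives both ice cream consumption and crime, so the two are strongly correlated even though neither causes the other. Once temperature is held constant - by correlating the residuals from regressing each variable on temperature - the correlation all but disappears.

```python
import numpy as np

# Hypothetical illustration: temperature (the lurking variable) drives both
# ice cream consumption and crime; neither causes the other.
rng = np.random.default_rng(0)
n = 1000
temperature = rng.normal(70, 15, n)                   # daily temperature (F)
ice_cream = 2.0 * temperature + rng.normal(0, 10, n)  # cones sold (simulated)
crime = 0.5 * temperature + rng.normal(0, 10, n)      # incidents (simulated)

# Raw correlation between ice cream and crime: strongly positive.
raw_r = np.corrcoef(ice_cream, crime)[0, 1]

def residuals(y, x):
    """Remove the linear effect of x from y."""
    slope, intercept = np.polyfit(x, y, 1)
    return y - (slope * x + intercept)

# "Control" for temperature: correlate what is left of each variable
# after its linear relationship with temperature is removed.
partial_r = np.corrcoef(residuals(ice_cream, temperature),
                        residuals(crime, temperature))[0, 1]
# raw_r is strongly positive; partial_r is near zero once temperature
# is held constant, showing the original correlation was spurious.
```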
It is important not to conflate a correlation with a cause-and-effect relationship. It is often the case that correlations between variables are found, but the relationship turns out to be spurious. Clearly understanding the relationship between variables is an important element of the scientific process.
Like the distinction drawn between positivist sociology and Verstehen sociology, there is often a distinction drawn between two types of sociological investigation: quantitative and qualitative.
Quantitative methods of sociological research approach social phenomena from the perspective that they can be measured and/or quantified. For instance, social class, following the quantitative approach, can be divided into different groups - upper-, middle-, and lower-class - and can be measured using any of a number of variables or a combination thereof: income, educational attainment, prestige, power, etc. Quantitative sociologists tend to use specific methods of data collection and hypothesis testing, including: experimental designs, surveys, secondary data analysis, and statistical analysis.
Qualitative methods of sociological research tend to approach social phenomena from the Verstehen perspective. They are used to develop a deeper understanding of a particular phenomenon. They also often deliberately give up on quantity - necessary for statistical analysis - in order to reach a depth in analysis of the phenomenon studied. Even so, qualitative methods can be used to propose relationships between variables. Qualitatively oriented sociologists tend to employ different methods of data collection and hypothesis testing, including: participant observation, interviews, focus groups, content analysis and historical comparison.
While there are sociologists who employ and encourage the use of only one or the other method, many sociologists see benefits in combining the approaches. They view quantitative and qualitative approaches as complementary. Results from one approach can fill gaps in the other approach. For example, quantitative methods could describe large or general patterns in society while qualitative approaches could help to explain how individuals understand those patterns.
Sociologists, like all humans, have values, beliefs, and even pre-conceived notions of what they might find in doing their research. Because sociologists are not immune to the desire to change the world, two approaches to sociological investigation have emerged. By far the most common is the objective approach advocated by Max Weber. Weber recognized that social scientists have opinions, but argued against the expression of non-professional or non-scientific opinions in the classroom. Weber took this position for several reasons, but the primary one outlined in his discussion of Science as a Vocation is that he believed it is not right for a person in a position of authority (a professor) to force his/her students to accept his/her opinions in order for them to pass the class. Weber did argue that it was acceptable for social scientists to express their opinions outside of the classroom and advocated for social scientists to be involved in politics and other social activism. The objective approach to social science remains popular in sociological research and refereed journals because it refuses to engage social issues at the level of opinions and instead focuses intently on data and theories.
The objective approach is contrasted with the critical approach, which has its roots in Karl Marx's work on economic structures. Anyone familiar with Marxist theory will recognize that Marx went beyond describing society to advocating for change. Marx disliked capitalism and his analysis of that economic system included the call for change. This approach to sociology is often referred to today as critical sociology (see also action research). Some sociological journals focus on critical sociology and some sociological approaches are inherently critical (e.g., feminism, black feminist thought).
Ethical considerations are of particular importance to sociologists because of the subject of investigation - people. Because ethical considerations are of so much importance, sociologists adhere to a rigorous set of ethical guidelines. The most important ethical consideration of sociological research is that participants in sociological investigation are not harmed. While exactly what this entails can vary from study to study, there are several universally recognized considerations. For instance, research on children and youth always requires parental consent. Research on adults also requires informed consent and participants are never forced to participate. Confidentiality and anonymity are two additional practices that ensure the safety of participants when sensitive information is provided (e.g., sexuality, income, etc.). To ensure the safety of participants, most universities maintain an institutional review board (IRB) that reviews studies that include human participants and ensures ethical rigor.
It has not always been the case that scientists interested in studying humans have followed ethical principles in their research. Several studies, when brought to light, led to the introduction of ethical principles guiding human subjects research and to Institutional Review Boards to ensure compliance with those principles. Notable examples include the Tuskegee syphilis experiment, in which 399 impoverished black men with syphilis were left untreated so researchers could track the progress of the disease, and Nazi experimentation on humans. A recent paper by Susan M. Reverby found that such unethical experiments were more widespread than the widely known Tuskegee study: the US Government funded a study in which thousands of Guatemalan prisoners were infected with syphilis to determine whether they could be cured with penicillin. Ethical oversight in science is designed to prevent such egregious violations of human rights today.
Sociologists also have professional ethical principles they follow. Obviously honesty in research, analysis, and publication is important. Sociologists who manipulate their data are ostracized and will have their memberships in professional organizations revoked. Conflicts of interest are also frowned upon. A conflict of interest can occur when a sociologist is given funding to conduct research on an issue that relates to the source of the funds. For example, if Microsoft were to fund a sociologist to investigate whether users of Microsoft's products are happier than users of open source software (e.g., Linux, OpenOffice.org), the sociologist would need to disclose the source of the funding as it presents a significant conflict of interest. A comprehensive explanation of sociological guidelines is provided on the website of the American Sociological Association.
Having discussed the sociological approach to understanding society, it is worth noting the limitations of sociology. Because of its subject of investigation (society), sociology runs into a number of problems that have significant implications for this field of inquiry.
While it is important to recognize the limitations of sociology, sociology's contributions to our understanding of society have been significant and continue to provide useful theories and tools for understanding humans as social beings.
A scientific method depends upon a careful characterization of the subject of the investigation. While seeking the pertinent properties of the subject, this careful thought may also entail some definitions and observations; the observation often demands careful measurement and/or counting.
The systematic, careful collection of measurements or counts of relevant quantities is often the critical difference between pseudo-sciences, such as alchemy, and a science, such as chemistry. Scientific measurements are usually tabulated, graphed, or mapped, and statistical manipulations, such as correlation and regression, performed on them. The measurements might be made in a controlled setting, such as a laboratory, or made on more or less inaccessible or unmanipulatable objects such as human populations. The measurements often require specialized scientific instruments such as thermometers, spectroscopes, or voltmeters, and the progress of a scientific field is usually intimately tied to their invention and development.
Measurements demand the use of operational definitions of relevant quantities (a.k.a. operationalization). That is, a scientific quantity is described or defined by how it is measured, as opposed to some more vague, inexact, or idealized definition. The operational definition of a thing often relies on comparisons with standards: the operational definition of mass, for instance, long relied on a physical artifact, a platinum-iridium cylinder kept in a laboratory in France. In short, to operationalize a variable means creating an operational definition for a concept someone intends to measure. This should be done with enough precision that independent researchers can use your description of the measurement to repeat it.
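As a toy illustration of operationalization, the function below encodes one possible operational definition of social class from income and education. The cutoffs are entirely hypothetical; the point is that the concept is defined by exactly how it is measured, so an independent researcher could apply the same rule and reproduce the classification.

```python
# A toy operationalization of "social class" (hypothetical cutoffs, for
# illustration only). The concept is defined by the measurement rule itself,
# making the classification repeatable by independent researchers.
def social_class(annual_income_usd, years_of_education):
    if annual_income_usd >= 150_000 or years_of_education >= 20:
        return "upper"
    if annual_income_usd >= 50_000 or years_of_education >= 16:
        return "middle"
    return "lower"

# Anyone applying the same rule to the same respondent gets the same answer.
example = social_class(75_000, 16)  # classified as "middle" under this rule
```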
The scientific definition of a term sometimes differs substantially from its natural language usage. For example, sex and gender are often used interchangeably in common discourse, but have distinct meanings in sociology. Scientific quantities are often characterized by their units of measure which can later be described in terms of conventional physical units when communicating the work.
Measurements in scientific work are also usually accompanied by estimates of their uncertainty. The uncertainty is often estimated by making repeated measurements of the desired quantity. Uncertainties may also be calculated by consideration of the uncertainties of the individual underlying quantities that are used. Counts of things, such as the number of people in a nation at a particular time, may also have an uncertainty due to limitations of the method used. Counts may only represent a sample of desired quantities, with an uncertainty that depends upon the sampling method used and the number of samples taken (see the central limit theorem).
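A minimal sketch of sampling uncertainty: by the central limit theorem, a proportion estimated from a random sample has a standard error that shrinks with the square root of the sample size. The figures below (520 of 1,000 hypothetical respondents) are invented for illustration.

```python
import math

# Sketch: the uncertainty of a sampled proportion shrinks with the square
# root of the sample size (a consequence of the central limit theorem).
def proportion_with_error(successes, n):
    p = successes / n
    se = math.sqrt(p * (1 - p) / n)           # standard error of the proportion
    return p, (p - 1.96 * se, p + 1.96 * se)  # approximate 95% interval

# Hypothetical survey: 520 of 1,000 respondents answered "yes".
p, (lo, hi) = proportion_with_error(520, 1000)
# p = 0.52, with an interval of roughly plus or minus 3 percentage points;
# quadrupling the sample size would halve that uncertainty.
```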
A hypothesis includes a suggested explanation of the subject. It will generally provide a causal explanation or propose some association between two variables. If the hypothesis is a causal explanation, it will involve at least one dependent variable and one independent variable.
Variables are measurable phenomena whose values can change (e.g., class status can range from lower- to upper-class). A dependent variable is a variable whose values are presumed to change as a result of the independent variable. In other words, the value of a dependent variable depends on the value of the independent variable. Of course, this assumes that there is an actual relationship between the two variables. If there is no relationship, then the value of the dependent variable does not depend on the value of the independent variable. An independent variable is a variable whose value is manipulated by the experimenter (or, in the case of non-experimental analysis, changes in the society and is measured). Perhaps an example will help clarify. In a study of the influence of gender on promotion, the independent variable would be gender/sex. Promotion would be the dependent variable. Change in promotion is hypothesized to be dependent on gender.
Scientists use whatever they can — their own creativity, ideas from other fields, induction, systematic guessing, etc. — to imagine possible explanations for a phenomenon under study. There are no definitive guidelines for the production of new hypotheses. The history of science is filled with stories of scientists claiming a flash of inspiration, or a hunch, which then motivated them to look for evidence to support or refute their idea.
A useful hypothesis will enable predictions, by deductive reasoning, that can be experimentally assessed. If results contradict the predictions, then the hypothesis under examination is incorrect or incomplete and requires either revision or abandonment. If results confirm the predictions, then the hypothesis might be correct but is still subject to further testing. Predictions refer to experimental designs with a currently unknown outcome. A prediction (of an unknown) differs from a consequence (which can already be known).
Once a prediction is made, an experiment is designed to test it. The experiment may seek either confirmation or falsification of the hypothesis.
Scientists assume an attitude of openness and accountability on the part of those conducting an experiment. Detailed record keeping is essential, to aid in recording and reporting on the experimental results, and providing evidence of the effectiveness and integrity of the procedure. They will also assist in reproducing the experimental results.
The experiment's integrity should be ascertained by the introduction of a control. Two virtually identical experiments are run, in only one of which the factor being tested is varied. This serves to further isolate any causal phenomena. For example, in testing a drug it is important to carefully verify that the supposed effect of the drug is produced only by the drug. Doctors may do this with a double-blind study: two virtually identical groups of patients are compared, one of which receives the drug and one of which receives a placebo. Neither the patients nor the doctor know who is getting the real drug, isolating its effects. This type of experiment is often referred to as a true experiment because of its design. It is contrasted with alternative forms below.
Once an experiment is complete, a researcher determines whether the results (or data) gathered are what was predicted. If the experimental conclusions fail to match the predictions/hypothesis, then one returns to the failed hypothesis and re-iterates the process - modifying one's theory or developing a new one. If the experiment appears successful - i.e. fits the hypothesis - the experimenter often will attempt to publish the results so that others (in theory) may reproduce the same experimental results, verifying the findings in the process.
An experiment is not an absolute requirement. In observation-based fields of science, actual experiments must be designed differently than for the classical laboratory-based sciences. Due to ethical concerns and the sheer cost of manipulating large segments of society, sociologists often turn to other methods for testing hypotheses. In lieu of holding variables constant in laboratory settings, sociologists employ statistical techniques (e.g., regression) that allow them to control the variables in the analysis rather than in the data collection. For instance, in examining the effects of gender on promotions, sociologists may control for the effects of social class as this variable will likely influence the relationship. Unlike a true experiment where these variables are held constant in a laboratory setting, sociologists use statistical methods to hold constant social class (or, better stated, partial out the variance accounted for by social class) so they can see the relationship between gender and promotions without the interference of social class. Thus, while the true experiment is ideally suited for the performance of science, especially because it is the best method for deriving causal relationships, other methods of hypothesis testing are commonly employed in the social sciences.
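A hedged sketch of statistical control, using synthetic data rather than real findings: instead of holding social class constant in a laboratory, we include it as a second predictor in a regression, so the gender coefficient reflects the association between gender and promotions net of class.

```python
import numpy as np

# Synthetic data (not real findings): promotions depend strongly on class
# and modestly on gender. We recover the gender effect by including class
# in the regression, i.e., controlling for it statistically.
rng = np.random.default_rng(1)
n = 2000
gender = rng.integers(0, 2, n)        # 0/1 coding, purely for illustration
social_class = rng.normal(0, 1, n)    # standardized class score
promotions = 0.3 * gender + 1.0 * social_class + rng.normal(0, 1, n)

# Design matrix: intercept, gender, and social class as predictors.
X = np.column_stack([np.ones(n), gender, social_class])
coef, *_ = np.linalg.lstsq(X, promotions, rcond=None)

# Gender gap in promotions, holding social class constant; it should be
# close to the 0.3 built into the simulated data.
gender_effect = coef[1]
```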
The scientific process is iterative. At any stage it is possible that some consideration will lead the scientist to repeat an earlier part of the process. For instance, failure of a hypothesis to produce interesting and testable predictions may lead to reconsideration of the hypothesis or of the definition of the subject.
It is also important to note that science is a social enterprise, and scientific work will become accepted by the community only if it can be verified. Crucially, experimental and theoretical results must be reproduced by others within the scientific community. All scientific knowledge is in a state of flux, for at any time new evidence could be presented that contradicts a long-held hypothesis. For this reason, scientific journals use a process of peer review, in which scientists' manuscripts are submitted by editors of scientific journals to (usually one to three) fellow (usually anonymous) scientists familiar with the field for evaluation. The referees may recommend rejection, publication, publication with suggested modifications, or, sometimes, publication in another journal. This serves to keep the scientific literature free of unscientific work, helps to cut down on obvious errors, and generally otherwise improves the quality of the scientific literature. Work announced in the popular press before going through this process is generally frowned upon. Sometimes peer review inhibits the circulation of unorthodox work, and at other times may be too permissive. The peer review process is not always successful, but has been very widely adopted by the scientific community.
The reproducibility or replication of scientific observations, while usually described as being very important in a scientific method, is actually seldom reported, and is in reality often not done. Referees and editors often reject papers purporting only to reproduce some observations as being unoriginal and not containing anything new. Occasionally reports of a failure to reproduce results are published - mostly in cases where controversy exists or a suspicion of fraud develops. The threat of failure to replicate by others, however, serves as a very effective deterrent for most scientists, who will usually replicate their own data several times before attempting to publish.
Sometimes useful observations or phenomena themselves cannot be reproduced. They may be rare, or even unique events. Reproducibility of observations and replication of experiments is not a guarantee that they are correct or properly understood. Errors can all too often creep into more than one laboratory.
Sociologists develop theories to explain social phenomena. A theory is a proposed relationship between two or more concepts. In other words, a theory is an explanation for why a phenomenon occurs. An example of a sociological theory is the work of Robert Putnam on the decline of civic engagement. Putnam found that Americans' involvement in civic life (e.g., community organizations, clubs, voting, religious participation, etc.) has declined over the last 40 to 60 years. While there are a number of factors that contribute to this decline (Putnam's theory is quite complex), one of the prominent factors is the increased consumption of television as a form of entertainment. Putnam's theory proposes that as television watching increases, civic engagement declines.
This element of Putnam's theory clearly illustrates the basic purpose of sociological theory: it proposes a relationship between two or more concepts. In this case, the concepts are civic engagement and television watching. The relationship is an inverse one - as one goes up, the other goes down. What's more, it is an explanation of one phenomenon with another: part of the reason why civic engagement has declined over the last several decades is because people are watching more television. In short, Putnam's theory clearly encapsulates the key ideas of a sociological theory.
Sociological theory is developed at multiple levels, ranging from grand theory to highly contextualized and specific micro-range theories. There are many middle-range and micro-range theories in sociology. Because such theories are dependent on context and specific to certain situations, it is beyond the scope of this text to explore each of those theories. The purpose of this chapter is to introduce some of the more well-known and most commonly used grand and middle-range theories in sociology.
Importance of Theory
In the theory proposed above, the astute reader will notice that the theory includes two components: The data, in this case the findings that civic engagement has declined and TV watching has increased, and the proposed relationship, that the increase in television viewing has contributed to the decline in civic engagement. Data alone are not particularly informative. If Putnam had not proposed a relationship between the two elements of social life, we may not have realized that television viewing does, in fact, reduce people's desire to and time for participating in civic life. In order to understand the social world around us, it is necessary to employ theory to draw the connections between seemingly disparate concepts.
Another example of sociological theorizing illustrates this point. In his now classic work, Suicide, Emile Durkheim was interested in explaining a social phenomenon, suicide, and employed both data and theory to offer an explanation. By aggregating data for large groups of people in Europe, Durkheim was able to discern patterns in suicide rates and connect those patterns with another concept (or variable): religious affiliation. Durkheim found that Protestants were more likely to commit suicide than were Catholics. At this point, Durkheim's analysis was still in the data stage; he had not proposed an explanation for the different suicide rates of the two groups. It was when Durkheim introduced the ideas of anomie and social solidarity that he began to explain the difference in suicide rates. Durkheim argued that the looser social ties found in Protestant religions lead to weaker social cohesion and reduced social solidarity. The higher suicide rates were the result of weakening social bonds among Protestants.
While Durkheim's findings have since been criticized, his study is a classic example of the use of theory to explain the relationship between two concepts. Durkheim's work also illustrates the importance of theory: without theories to explain the relationship between concepts, we would not be able to understand cause-and-effect relationships in social life. Identifying such cause-and-effect relationships is a central aim of sociological theory.
Prominent Sociological Theories
As noted above, there are many theories in sociology. However, there are several broad theoretical perspectives that are prominent in the field (they are arguably paradigms). These theories are prominent because they are quite good at explaining social life. They are not without their problems, but these theories remain widely used and cited precisely because they have withstood a great deal of criticism.
As the dominant theories in sociology are discussed below, you might be inclined to ask, "Which of these theories is the best?" As is often the case in sociology, just because things are different doesn't mean one is better than another. In fact, it is probably more useful and informative to view these theories as complementary. One theory may explain one element of society better than another. Or, both may be useful for explaining social life. In short, all of the theories are correct in the sense that they offer compelling explanations for social phenomena.
Structural-Functionalism is a sociological theory that originally attempted to explain social institutions as collective means to meet individual biological needs (originally just functionalism). Later it came to focus on the ways social institutions meet social needs (structural-functionalism).
Structural-functionalism draws its inspiration primarily from the ideas of Emile Durkheim. Durkheim was concerned with the question of how societies maintain internal stability and survive over time. He sought to explain social cohesion and stability through the concept of solidarity. In more "primitive" societies it was mechanical solidarity, everyone performing similar tasks, that held society together. Durkheim proposed that such societies tend to be segmentary, being composed of equivalent parts that are held together by shared values, common symbols, or systems of exchanges. In modern, complex societies members perform very different tasks, resulting in a strong interdependence between individuals. Based on the metaphor of an organism in which many parts function together to sustain the whole, Durkheim argued that modern complex societies are held together by organic solidarity (think interdependent organs).
The central concern of structural-functionalism is a continuation of the Durkheimian task of explaining the apparent stability and internal cohesion of societies that are necessary to ensure their continued existence over time. Many functionalists argue that social institutions are functionally integrated to form a stable system and that a change in one institution will precipitate a change in other institutions. Societies are seen as coherent, bounded and fundamentally relational constructs that function like organisms, with their various parts (social institutions) working together to maintain and reproduce them. The various parts of society are assumed to work in an unconscious, quasi-automatic fashion towards the maintenance of the overall social equilibrium. All social and cultural phenomena are therefore seen as being functional in the sense of working together to achieve this state and are effectively deemed to have a life of their own. These components are then primarily analysed in terms of the function they play. In other words, to understand a component of society, one can ask the question, "What is the function of this institution?" A function, in this sense, is the contribution made by a phenomenon to a larger system of which the phenomenon is a part.
Thus, one can ask of education, "What is the function of education for society?" The answer is actually quite complex and requires a detailed analysis of the history of education (see, for instance, this article on the history of education), but one obvious answer is that education prepares individuals to enter the workforce. By delineating the functions of elements of society, of the social structure, we can better understand social life.
Durkheim's strongly sociological perspective of society was continued by Radcliffe-Brown. Following Auguste Comte, Radcliffe-Brown believed that the social constituted a separate level of reality distinct from both the biological and the inorganic (here non-living). Explanations of social phenomena therefore had to be constructed within this social level, with individuals merely being transient occupants of comparatively stable social roles. Thus, in structural-functionalist thought, individuals are not significant in and of themselves but only in terms of their social status: their position in patterns of social relations. The social structure is therefore a network of statuses connected by associated roles.
Structural-functionalism was the dominant perspective of sociology between World War II and the Vietnam War.
Structural-functionalism has been criticized for being unable to account for social change because it focuses so intently on social order and equilibrium in society. For instance, in the late 19th Century, higher education transitioned from a training center for clergy and the elite to a center for the conduct of science and the general education of the masses. In other words, education did not always serve the function of preparing individuals for the labor force (with the exception of the ministry and the elite). As structural-functionalism thinks about elements of social life in relation to their present function and not their past functions, structural-functionalism has a difficult time explaining why a function of some element of society might change or how such change occurs. However, structural-functionalism could, in fact, offer an explanation in this case. Also occurring in the 19th Century (though begun in the 18th) was the industrial revolution. The industrial revolution, facilitated by capitalism, was increasingly demanding technological advances to increase profit. Technological advances and advanced industry both required more educated workforces. Thus, as one aspect of society changed - the economy and production - it required a comparable change in the educational system, bringing social life back into equilibrium.
Another philosophical problem with the structural-functional approach is the ontological argument that society does not have needs as a human being does; and even if society does have needs they need not be met. The idea that society has needs like humans do is not a tenable position because society is only alive in the sense that it is made up of living individuals. Thus, society cannot have wants and/or needs like humans do. What's more, just because a society has some element in it at the present that does not mean that it must necessarily have that element. For instance, in the United Kingdom, religious service attendance has declined precipitously over the last 100 years. Today, less than 1 in 10 British attend religious service in a given week. Thus, while one might argue that religion has certain functions in British society, it is becoming apparent that it is not necessary for British society to function.
Another criticism often leveled at structural-functionalism is that it supports the status quo. Because the theory paints conflict and challenges to the status quo as harmful to society, critics argue, it tends to be the prominent view among conservative thinkers.
Manifest and Latent Functions
Merton (1957) proposed a distinction between manifest and latent functions. Manifest functions are the intended functions of a phenomenon in a social system. Latent functions are the unintended functions of a phenomenon in a social system. An example of manifest and latent functions is education. The manifest purpose of public education is to increase the knowledge and abilities of the citizenry to prepare them to contribute to the workforce. A latent function of the public education system is the development of a hierarchy of the learned. The most learned are often also the most affluent. Thus, while education's manifest function is to empower all individuals to contribute to the workforce and society, it also limits some people by creating boundaries of entry into occupations.
A prominent sociological theory that is often contrasted with structural-functionalism is conflict theory. Conflict theory argues that society is not best understood as a complex system striving for equilibrium but rather as a competition. Society is made up of individuals competing for limited resources (e.g., money, leisure, sexual partners, etc.). Broader social structures and organizations (e.g., religions, government, etc.) reflect the competition for resources in their inherent inequalities; some people and organizations have more resources (i.e., power and influence) and use those resources to maintain their positions of power in society.
Conflict theory was developed in part to illustrate the limitations of structural-functionalism. The structural-functionalist approach argued that society tends toward equilibrium, focusing on stability at the expense of social change. This is contrasted with the conflict approach, which argues that society is constantly in conflict over resources. One of the primary contributions conflict theory presents over the structural-functional approach is that it is ideally suited for explaining social change, a significant problem in the structural-functional approach.
The following are three primary assumptions of modern conflict theory:
Competition over scarce resources (e.g., money, power, leisure) is at the heart of all social relationships.
Inequalities in power and reward are built into all social structures; those who benefit from a particular structure strive to see it maintained.
Change occurs as a result of conflict between competing interests rather than through adaptation.
A heuristic device to help you think about society from a conflict perspective is to ask, "Who benefits from this element of society?" Using the same example as above, we can ask, "Who benefits from the current higher educational system in the U.S.?" The answer, of course, is the wealthy. Why? Because higher education in the U.S. is not free. Thus, the educational system often screens out poorer individuals, not because they are unable to compete academically but because they cannot afford to pay for their education. Because the poor are unable to obtain higher education, they are also generally unable to get higher-paying jobs, which means they remain poor. This can easily translate into a vicious cycle of poverty. Thus, while the function of education is to educate the workforce, it also has built into it an element of conflict and inequality, favoring one group (the wealthy) over others (the poor). Thinking about education this way helps illustrate why both structural-functionalist and conflict theories are helpful in understanding how society works.
Conflict theory was elaborated in the United Kingdom by Max Gluckman and John Rex, in the United States by Lewis A. Coser and Randall Collins, and in Germany by Ralf Dahrendorf, all of whom were influenced by Karl Marx, Ludwig Gumplowicz, Vilfredo Pareto, Georg Simmel, and other founding fathers of European sociology.
Not surprisingly, the primary limitation of the social-conflict perspective is that it overlooks the stability of societies. While societies are in a constant state of change, much of the change is minor. Many of the broader elements of societies remain remarkably stable over time, indicating the structural-functional perspective has a great deal of merit.
As noted above, sociological theory is often complementary. This is particularly true of structural-functionalism and social-conflict theories. Structural-functionalism focuses on equilibrium and solidarity; conflict-theory focuses on change and conflict. Keep in mind that neither is better than the other; when combined, the two approaches offer a broader and more comprehensive view of society.
In contrast to the rather broad approach toward society of structural-functionalism and conflict theory, Symbolic Interactionism is a theoretical approach to understanding the relationship between humans and society. The basic notion of symbolic interactionism is that human action and interaction are understandable only through the exchange of meaningful communication or symbols. In this approach, humans are portrayed as acting as opposed to being acted upon.
The main principles of symbolic interactionism are:
Human beings act toward things on the basis of the meanings those things have for them.
These meanings arise out of social interaction with others.
These meanings are handled in, and modified through, an interpretive process used by the person in dealing with the things he/she encounters.
This approach stands in contrast to the strict behaviorism of psychological theories prevalent at the time it was first formulated (in the 1920s and 1930s). According to Symbolic Interactionism, humans are distinct from infrahumans (lower animals) because infrahumans simply respond to their environment (i.e., a stimulus evokes a response or stimulus -> response) whereas humans have the ability to interrupt that process (i.e., stimulus -> cognition -> response). Additionally, infrahumans are unable to conceive of alternative responses to gestures. Humans, however, can. This understanding should not be taken to indicate that humans never behave in a strict stimulus -> response fashion, but rather that humans have the capability of not responding in that fashion (and do so much of the time).
This perspective is also rooted in phenomenological thought (see social constructionism and phenomenology). According to symbolic interactionism, the objective world has no reality for humans, only subjectively-defined objects have meaning. Meanings are not entities that are bestowed on humans and learned by habituation. Instead, meanings can be altered through the creative capabilities of humans, and individuals may influence the many meanings that form their society. Human society, therefore, is a social product.
Neurological evidence based on EEGs supports the idea that humans have a "social brain," that is, there are components of the human brain that govern social interaction. These parts of the brain begin developing in early childhood (the preschool years) and aid humans in understanding how other people think. In symbolic interactionism, this is known as "reflected appraisals" or "the looking glass self" and refers to our ability to think about how other people will think about us. A good example of this is when people try on clothes before going out with friends. Some people may not think much about how others will think about their clothing choices, but others can spend quite a bit of time considering what they are going to wear. And while they are deciding, the dialogue that is taking place inside their mind is usually a dialogue between their "self" (that portion of their identity that calls itself "I") and that person's internalized understanding of their friends and society (a "generalized other"). An indicator of mature socialization is when an individual quite accurately predicts how other people think about him/her. Such an individual has incorporated the "social" into the "self."
It should also be noted that symbolic interactionists advocate a particular methodology. Because they see meaning as the fundamental component of human and society interaction, studying human and society interaction requires getting at that meaning. Thus, symbolic interactionists tend to employ more qualitative rather than quantitative methods in their research.
The most significant limitation of the symbolic-interactionist perspective relates to its primary contribution: it overlooks macro social structures (e.g., norms, culture) as a result of focusing on micro-level interactions. Some symbolic interactionists, however, would counter that if role theory (see below) is incorporated into symbolic interactionism - which is now commonplace - this criticism is addressed.
Another more micro-oriented approach to understanding social life that also incorporates the more structural elements of society is Role Theory. Role theory posits that human behavior is guided by expectations held both by the individual and by other people. The expectations correspond to different roles individuals perform or enact in their daily lives, such as secretary, father, or friend. For instance, most people hold preconceived notions of the role expectations of a secretary, which might include: answering phones, making and managing appointments, filing paperwork, and typing memos. These role expectations would not be expected of a professional soccer player.
Individuals generally have and manage many roles. Roles consist of a set of rules or norms that function as plans or blueprints to guide behavior. Roles specify what goals should be pursued, what tasks must be accomplished, and what performances are required in a given scenario or situation. Role theory holds that a substantial proportion of observable, day-to-day social behavior is simply persons carrying out their roles, much as actors carry out their roles on the stage or ballplayers theirs on the field. Role theory is, in fact, predictive. It implies that if we have information about the role expectations for a specified status (e.g., sister, fireman, prostitute), a significant portion of the behavior of the persons occupying that position can be predicted.
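The predictive claim above can be sketched as a simple lookup: if we know the role expectations attached to a status, we can "predict" the occupant's behavior by reading them off. The following Python sketch is purely illustrative; the statuses and expectations are hypothetical examples, not data from role theory itself.

```python
# Hypothetical illustration: role expectations as a lookup table.
# The statuses and their expectations below are invented examples.
ROLE_EXPECTATIONS = {
    "secretary": {"answer phones", "manage appointments", "file paperwork", "type memos"},
    "professional soccer player": {"attend practice", "play matches", "maintain fitness"},
}

def predicted_behaviors(status):
    """Role theory's predictive claim in miniature: given a status,
    return the behaviors we expect its occupant to carry out."""
    return ROLE_EXPECTATIONS.get(status, set())

print(sorted(predicted_behaviors("secretary")))
```

The point of the sketch is only that prediction here is table lookup: the theory says nothing about behavior outside the stored expectations, which is exactly the limitation discussed below in the bank-teller example.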
What's more, role theory also argues that in order to change behavior it is necessary to change roles; roles correspond to behaviors and vice versa. In addition to heavily influencing behavior, roles influence beliefs and attitudes; individuals will change their beliefs and attitudes to correspond with their roles. For instance, someone overlooked for a promotion to a managerial position in a company may change his/her beliefs about the benefits of management by becoming convinced that he/she did not want the additional responsibility that would have accompanied the position.
Many role theorists see Role Theory as one of the most compelling theories bridging individual behavior and social structure. Roles, which are dictated in part by social structure and in part by social interactions, guide the behavior of the individual. The individual, in turn, influences the norms, expectations, and behaviors associated with roles. The relationship is reciprocal.
Role Theory includes the following propositions:
People spend much of their lives participating as members of groups and organizations.
Within these groups, people occupy distinct positions or statuses (e.g., secretary, father, friend).
Each position entails a role: a set of norms or expectations that functions as a plan or blueprint for behavior.
Individuals generally carry out their roles in accordance with these expectations, and their performances in turn shape the expectations others hold.
Role theory has a hard time explaining social deviance when it does not correspond to a pre-specified role. For instance, the behavior of someone who adopts the role of bank robber can be predicted - she will rob banks. But if a bank teller simply begins handing out cash to random people, role theory would be unable to explain why (though role conflict could be one possible answer; the teller may also be a Marxist-Communist who believes the means of production should belong to the masses and not the bourgeoisie).
Another limitation of role theory is that it does not and cannot explain how role expectations came to be what they are. Role theory has no explanation for why male soldiers are expected to cut their hair short, but it could predict with a high degree of accuracy that if someone is a male soldier, he will have short hair. Additionally, role theory does not explain when and how role expectations change.
An extension of role theory, impression management is both a theory and process. The theory argues that people are constantly engaged in controlling how others perceive them. The process refers to the goal-directed conscious or unconscious effort to influence the perceptions of other people by regulating and controlling information in social interaction. If a person tries to influence the perception of her or his own image, this activity is called self-presentation.
Erving Goffman (1959), the person most often credited with formally developing impression management theory, cast the idea in a dramaturgical framework. The basic idea is that individuals in face-to-face situations are like actors on a stage performing roles (see role theory above). Aware of how they are being perceived by their audience, actors manage their behavior so as to create specific impressions in the minds of the audience. Strategic interpersonal behavior to shape or influence impressions formed by an audience is not a new idea. Plato spoke of the "great stage of human life" and Shakespeare noted that "All the world's a stage, and all the men and women merely players".
Social constructionism is a school of thought introduced into sociology by Peter L. Berger and Thomas Luckmann with their 1966 book The Social Construction of Reality. Social constructionism aims to discover the ways that individuals and groups create their perceived reality. Social constructionism focuses on the description of institutions and actions rather than on analyzing cause and effect. Socially constructed reality is seen as an ongoing, dynamic process; reality is reproduced by people acting on their interpretations of what they perceive to be the world external to them. Berger and Luckmann argue that social construction describes both subjective and objective reality - that is, no reality exists outside what is produced and reproduced in social interactions.
A clear example of social constructionist thought is, following Sigmund Freud and Émile Durkheim, religion. Religion is seen as a socially constructed concept, the basis for which is rooted in either our psyche (Freud) or man's need to see some purpose in life or worship a higher presence. One of the key theorists of social constructionism, Peter Berger, explored this concept extensively in his book, The Sacred Canopy.
Recently, some sociologists have been taking a different approach to sociological theory by employing an integrationist approach - combining micro- and macro-level theories to provide a comprehensive understanding of human social behavior. Numerous models could be presented in this vein. George Ritzer's Integration Model is a good example.
Ritzer proposes four highly interdependent elements in his sociological model: a macro-objective component (e.g., society, law, bureaucracy), a micro-objective component (e.g., patterns of behavior and human interaction), a macro-subjective component (e.g., culture, norms, and values), and a micro-subjective component (e.g., perceptions, beliefs). This model is of particular use in understanding society because it uses two axes: one ranging from objective (society) to subjective (culture and cultural interpretation); the other ranging from the macro-level (norms) to the micro-level (individual level beliefs).
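Ritzer's two axes can be thought of as a small two-way table: one axis runs from macro to micro, the other from objective to subjective, and each of the four cells holds one component of the model. The sketch below renders that idea in Python; the cell labels mirror the examples in the paragraph above, but the data structure itself is our own illustration, not part of Ritzer's work.

```python
# Illustrative sketch of Ritzer's integration model as a two-axis table.
# Keys are (level, mode) pairs; values are the example phenomena named above.
RITZER_COMPONENTS = {
    ("macro", "objective"): ["society", "law", "bureaucracy"],
    ("micro", "objective"): ["patterns of behavior", "human interaction"],
    ("macro", "subjective"): ["culture", "norms", "values"],
    ("micro", "subjective"): ["perceptions", "beliefs"],
}

def component(level, mode):
    """Return the example phenomena for one cell of the model."""
    return RITZER_COMPONENTS[(level, mode)]

print(component("macro", "subjective"))
```

Laying the model out this way makes the "highly interdependent" claim concrete: any social phenomenon can be located in one cell while being influenced by, and influencing, the other three.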
The integration approach is particularly useful for explaining social phenomena because it shows how the different components of social life work together to influence society and behavior.
If used for understanding a specific cultural phenomenon, like the displaying of abstract art in one's home, the integration model depicts the different influences on the decision. For instance, the model depicts that cultural norms can influence individual behavior. The model also shows that individual level values, beliefs, and behaviors influence macro-level culture. This is, in fact, part of what David Halle finds: while there are art consumption differences based on class, they are not predicted solely by class. Displayers of abstract art tend not only to belong to the upper-class, but also are employed in art-production occupations. This would indicate that there are multiple levels of influence involved in art tastes – both broad cultural norms and smaller level occupational norms in addition to personal preferences.
The sociological understanding of societal development relies heavily upon the work of Gerhard Lenski. Lenski outlined some of the more commonly seen organizational structures in human societies. Classifications of human societies can be based on two factors: (1) the primary means of subsistence and (2) the political structure. This chapter focuses on the subsistence systems of societies rather than their political structures.
While it is a bit far-reaching to argue that all societies will develop through the stages outlined below, it does appear that most societies follow such a route. Human groups begin as hunter-gatherers, move toward pastoralism and/or horticulturalism, develop toward an agrarian society, and ultimately end up undergoing a period of industrialization (with the potential for developing a service industry following industrialization). Not all societies pass through every stage. Some societies have stopped at the pastoral or horticultural stage (e.g., Bedouin nomads), though these may be temporary pauses due to economic niches that will likely disappear over time. Some societies may also jump stages as a result of the introduction of technology from other societies. It is also worth noting that these categories aren't really distinct groups as there is often overlap in the subsistence systems used in a society. Some pastoralist societies also engage in some measure of horticultural food production and most industrial and post-industrial societies still have agriculture, just in a reduced capacity.
The Implications of Societal Development
As Western societies transitioned from pre-industrial economies based primarily on agriculture to industrialized societies in the 19th century, some people worried about the impacts such changes would have on society and individuals. Three early sociologists, Weber, Marx, and Durkheim, perceived different impacts of the Industrial Revolution on the individual and society and described those impacts in their work.
Weber and Rationalization
Marx and Alienation
Durkheim and Solidarity
The hunter-gatherer way of life is based on the exploitation of wild plants and animals. Consequently, hunter-gatherers are relatively mobile, and groups of hunter-gatherers have fluid boundaries and composition. Typically in hunter-gatherer societies men hunt larger wild animals and women gather fruits, nuts, roots, and other edible plant-based food and hunt smaller animals. Hunter-gatherers use materials available in the wild to construct shelters or rely on naturally occurring shelters like overhangs. Their shelters give them protection from predators and the elements.
The majority of hunter-gatherer societies are nomadic. It is difficult to remain settled under such a subsistence system, as the resources of one region can quickly become exhausted. Hunter-gatherer societies also tend to have very low population densities as a result of their subsistence system; agricultural subsistence systems, by contrast, can support population densities 60 to 100 times greater than those of land left uncultivated.
Hunter-gatherer societies also tend to have non-hierarchical social structures, though this is not always the case. Because hunter-gatherers tend to be nomadic, they generally cannot store surplus food. As a result, full-time leaders, bureaucrats, or artisans are rarely supported by hunter-gatherer societies. This egalitarianism tends to extend to gender relations as well. Although disputed, many anthropologists believe gender egalitarianism in hunter-gatherer societies stems from the lack of control over food production, the lack of a food surplus (which can be used for control), and an equal gender contribution to kin and cultural survival.
Archeological evidence to date suggests that prior to 13,000 BCE, all human beings were hunter-gatherers. While declining in number, there are still some hunter-gatherer groups in existence today. Such groups are found in the Arctic, tropical rainforests, and deserts, where other forms of subsistence production are impossible or too costly. In most cases these groups do not have a continuous history of hunting and gathering; in many cases their ancestors were agriculturalists who were pushed into marginal areas as a result of migrations and wars. Examples of hunter-gatherer groups still in existence include:
the San (Bushmen) of southern Africa
the Hadza of Tanzania
the Sentinelese of the Andaman Islands
The line between agricultural and hunter-gatherer societies is not clear cut. Many hunter-gatherers consciously manipulate the landscape through cutting or burning useless (to them) plants to encourage the growth and success of those they consume. Most agricultural people also tend to do some hunting and gathering. Some agricultural groups farm during the temperate months and hunt during the winter.
A pastoralist society is a society in which the primary means of subsistence is domesticated livestock. It is often the case that, like hunter-gatherers, pastoralists are nomadic, moving seasonally in search of fresh pastures and water for their animals. Employment of a pastoralist subsistence system often results in greater population densities and the development of both social hierarchies and divisions of labor, as a surplus of food is more likely.
Pastoralist societies still exist. For instance, in Australia, the vast semi-arid areas in the interior of the country contain pastoral runs called sheep stations. These areas may be thousands of square kilometers in size. The number of livestock allowed in these areas is regulated in order to reliably sustain them, providing enough feed and water for the stock. Other examples of pastoralist societies still in existence include:
the Maasai of East Africa
the Bedouin of the Middle East and North Africa
the Sami reindeer herders of northern Scandinavia
Horticulturalist societies are societies in which the primary means of subsistence is the cultivation of crops using hand tools. Like pastoral societies, the cultivation of crops increases population densities and, as a result of food surpluses, allows for a division of labor in society.
Horticulture differs from agriculture in that agriculture employs animals, machinery, or some other non-human means to facilitate the cultivation of crops while horticulture relies solely on humans for crop cultivation.
Agrarian societies are societies in which the primary means of subsistence is the cultivation of crops using a mixture of human and non-human means (i.e., animals and/or machinery). Agriculture is the process of producing food, feed, fiber, and other desired products by the cultivation of plants and the raising of domesticated animals (livestock). Agriculture can refer to subsistence agriculture or industrial agriculture.
Subsistence agriculture is agriculture carried out for the production of enough food to meet just the needs of the agriculturalist and his/her family. Subsistence agriculture is a simple, often organic, system using saved seed native to the ecoregion combined with crop rotation or other relatively simple techniques to maximize yield. Historically most farmers were engaged in subsistence agriculture and this is still the case in many developing nations.
In developed nations, a person using such simple techniques on small patches of land would generally be referred to as a gardener; activity of this type would be seen more as a hobby than a profession. Some people in developed nations are driven to such subsistence methods by poverty. It is also worth noting that large-scale organic farming is on the rise as a result of a renewed interest in non-genetically modified and pesticide-free foods.
In developed nations, a farmer or industrial agriculturalist is usually defined as someone with an ownership interest in crops or livestock who provides labor or management in their production. Farmers obtain their financial income from the cultivation of land to yield crops, the commercial raising of animals (animal husbandry), or both. Those who provide only labor but not management and do not have ownership are often called farmhands or, if they supervise a leased strip of land growing only one crop, sharecroppers.
Agriculture allows a much greater density of population than can be supported by hunting and gathering and allows for the accumulation of excess product to keep for winter use or to sell for profit. The ability of farmers to feed large numbers of people whose activities have nothing to do with material production was the crucial factor in the rise of surplus, specialization, advanced technology, hierarchical social structures, inequality, and standing armies.
Development of Horticulture and Agriculture
An industrial society is a society in which the primary means of subsistence is industry. Industry is a system of production focused on mechanized manufacturing of goods. Like agrarian societies, industrial societies increase food surpluses, resulting in more developed hierarchies and significantly more division of labor.
The division of labor in industrial societies is often one of the most notable elements of the society and can even function to re-organize the development of relationships. Whereas relationships in pre-industrial societies were more likely to develop through contact at one's place of worship or through proximity of housing, industrial society brings people with similar occupations together, often leading to the formation of friendships through one's work.
When capitalized, the Industrial Revolution refers to the first known industrial revolution, which took place in Europe during the 18th and 19th centuries. What is sometimes referred to as the Second Industrial Revolution describes later, somewhat less dramatic changes resulting from the widespread availability of electric power and the internal-combustion engine. Many developing nations began industrializing under the influence of either the United States or the USSR during the Cold War.
Today, industry makes up only a relatively small percentage of highly developed countries' workforce (see the pie chart above), in large part due to advanced mechanization. The use of machines and robots to facilitate manufacturing reduces the number of people required to work in industry by increasing their efficiency. As a result, a single worker can produce substantially more goods in the same amount of time today than was previously possible. This has also resulted in a transition in most highly developed countries into a post-industrial or service-oriented economy.
A post-industrial society is a society in which the primary means of subsistence is derived from service-oriented work, as opposed to agriculture or industry. It is important to note here that the term post-industrial is still debated in part because it is the current state of society; it is difficult to name a phenomenon while it is occurring.
Post-industrial societies are often marked by:
a substantial increase in the size of the service sector relative to manufacturing
the growing importance of information, knowledge, and technology in the economy
a workforce concentrated in professional, technical, and service occupations
Most highly developed countries are now post-industrial in that the majority of their workforce works in service-oriented industries, like finance, healthcare, education, or sales, rather than in industry or agriculture. This is the case in the U.S., as depicted in the pie chart above.
The term post-industrial is occasionally used critically by individuals seeking to restore or return to industrial development. Increasingly, however, individuals and communities are viewing abandoned factories as sites for new housing and shopping. Capitalists are also realizing the recreational and commercial development opportunities such locations offer.
As noted throughout the discussion of societal development, changes in the social structure of a society - in this case the primary means of subsistence - also affect other aspects of society. For instance, as hunter-gatherers make the transition into pastoralism and horticulture, they also develop a surplus in foodstuffs. While it is common for people in the developed world today to have a surplus of food, we rarely consider just how important that extra food is. To begin with, once a society has surplus food, more of its children will survive into adulthood. Additionally, as food yields increase in agricultural societies, smaller percentages of the population are required to produce the food for the rest of the population. This frees those not engaged in food production to specialize in other areas, like clothing or housing production. The result is specialists: some people become experts in growing crops or raising livestock while others become experts in clothing production, metal-working, home construction, and so on. That specialization leads to rapid increases in technology, as people freed from having to spend the majority of their time finding or growing their food can spend their time improving their specialty. The relationship between surplus and technology may not seem obvious initially, but surplus is clearly the forerunner of technological development.
This is illustrated in the diagram to the right. The diagram shows societal development along the top and the implications of societal development along the bottom. The arrows running between the two rows illustrate the fact that these relationships are complex. For instance, specialization results not only from agriculture but also from denser populations and surplus, and it helps spur industry. The point is that these are interdependent aspects of societal development that co-evolve.
One additional outcome of surplus that is included in the diagram is inequality. Inequality will be discussed in much greater detail later in this book, but it is important to note that as soon as there is surplus, there will be greater surplus for some people. Those with more surplus have an economic advantage relative to those with less surplus as they have greater bargaining power - et voilà, social inequality is born.
Horticulture and agriculture as types of subsistence developed among humans somewhere between 10,000 and 8,000 B.C.E. in the Fertile Crescent region of the Middle East. The reasons for the development of horticulture and agriculture are debated but may have included climate change and the accumulation of food surplus for competitive gift-giving. Most certainly there was a gradual transition from hunter-gatherer to agricultural economies after a lengthy period when some crops were deliberately planted and other foods were gathered from the wild. In addition to the emergence of farming in the Fertile Crescent, agriculture appeared by at least 6,800 B.C.E. in East Asia (rice) and, later, in Central and South America (maize and squash). Small scale agriculture also likely arose independently in early Neolithic contexts in India (rice) and Southeast Asia (taro).
Full dependency on domestic crops and animals (i.e., when wild resources contributed a nutritionally insignificant component to the diet) did not occur until the Bronze Age. If the operative definition of agriculture includes large-scale intensive cultivation of land (i.e., mono-cropping, organized irrigation, and use of a specialized labor force), the title "inventors of agriculture" would fall to the Sumerians, starting around 5,500 B.C.E.
By the early 1800s agricultural practices, particularly careful selection of hardy strains and cultivars, had so improved that yield per land unit was many times that seen in the Middle Ages and before, especially in the largely virgin lands of North and South America.
In the developed world, the use of crop breeding, better management of soil nutrients, and improved weed control have greatly increased yields per unit area. At the same time, mechanization has decreased labor input. The developing world generally produces lower yields, having less access to the latest science, capital, and technology. More people in the world are involved in agriculture as their primary economic activity than in any other, yet agriculture accounts for only four percent of the world's GDP. The rapid rise of mechanization in the 20th century, especially in the form of the tractor, reduced the necessity of humans performing the demanding tasks of sowing, harvesting, and threshing. With mechanization, these tasks could be performed with a speed and on a scale barely imaginable before. These advances have resulted in substantial increases in agricultural yields, which have in turn translated into a decline in the percentage of the population in developed countries required to work in agriculture to feed the rest of the population. As the pie chart below indicates, less than 2% of Americans are employed in agriculture today, yet they produce sufficient food to feed the other 98%.
Max Weber was particularly concerned about the rationalization and bureaucratization of society stemming from the Industrial Revolution and how these two changes would affect humanity's agency and happiness. As Weber understood society, particularly during the industrial revolution of the late 19th century in which he lived, society was being driven by the passage of rational ideas into culture, which, in turn, transformed society into an increasingly bureaucratic entity. Bureaucracy is a type of organizational or institutional management that is, as Weber understood it, rooted in legal-rational authority. Bureaucracy is a complex means of managing life in social institutions that includes rules and regulations, patterns and procedures, all of which are designed to simplify the functioning of complex organizations. An example of bureaucracy would be the forms used to pay one's income taxes - they require specific information and procedures to fill out. Included in those forms, however, are countless rules and laws that dictate what can and cannot be claimed on one's taxes. Thus, bureaucracy simplifies the process of paying one's taxes by putting the process into a formulaic structure, but simultaneously complicates it by adding rules and regulations that govern the procedure. Weber did believe bureaucracy was the most rational form of institutional governance, but because he viewed rationalization as the driving force of society, he believed bureaucracy would increase until it ruled society. Society, for Weber, would become almost synonymous with bureaucracy.
As Weber did not see any alternative to bureaucracy, he believed it would ultimately lead to an iron cage: there would be no way to get out of it. Weber viewed this as a bleak outcome that would affect individuals' happiness, as they would be forced to function in a highly rational society with rigid rules and norms without the possibility of changing it. Because Weber could not envision other forces influencing the ultimate direction of society - the exception being temporary lapses into non-bureaucracy spurred by charismatic leaders - he saw no cure for the iron cage of rationality. Since a completely rational society was inevitable and bureaucracy was the most rational form of societal management, the iron cage, according to Weber, has no solution: society would become a large bureaucracy governing people's lives.
Karl Marx took a different perspective on the impact of the Industrial Revolution on society and the individual. In order to understand Marx's perspective, however, it is necessary to understand how Marx perceived happiness. According to Marx, species being (or happiness) is the pinnacle of human nature. Species being is understood to be a type of self-realization or self-actualization brought about by meaningful work. But in addition to engaging in meaningful work, self-actualized individuals must also own the products of their labors and have the option of doing what they will with those products.
In a capitalist society (which co-evolved with the Industrial Revolution), rather than owning the fruits of their labors, the proletariat, or working class, owns only its labor power, not the fruits of its labors (i.e., the results of production). The capitalists, or bourgeoisie, employ the proletariat for a living wage but keep the products of the labor. As a result, the proletariat is alienated from the fruits of its labor - its members do not own the products they produce, only their labor power. Because Marx believed species being to be the goal and ideal of human nature, and that species being could only be realized when individuals owned the results of their labors, Marx saw capitalism as leading toward increasingly unhappy individuals; they would be alienated from the results of their production and therefore would not be self-realized.
But the alienation from the results of their production is just one component of the alienation Marx proposed. In addition to the alienation from the results of production, the proletariat is also alienated from each other under capitalism. Capitalists alienate the proletariat from each other by forcing them to compete for limited job opportunities. Job opportunities are limited under capitalism in order for capitalists to keep wages down; without a pool of extraneous workers, capitalists would have to meet the wage demands of their workers. Because they are forced to compete with other members of the proletariat, workers are alienated from each other, compounding the unhappiness of the proletariat.
While Marx did have a solution to the problem of alienation, he seldom discussed it in detail. Marx's proposed solution was for the proletariat to unite and through protests or revolution (or legislation in democratic nations) overthrow the bourgeoisie and institute a new form of government – communism. This form of government would be based on communally owned and highly developed means of production and self-governance. The means of production would be developed – through capitalism – to the point that everyone in society would have sufficient 'free' time to allow them to participate in whatever governmental decisions needed to be made for the community as a whole. By re-connecting the individual with the fruits of their labor and empowering them toward true self-governance, species being would be realized and happiness would be returned.
Two additional comments are in order here. First, the economic systems that developed in the Soviet Union and China - as well as other parts of the world - were not the communism envisioned by Marx. Rather, they achieved a form of socialism, what Marx called the stage between capitalism and communism. Second, Marx believed capitalism, while harmful to species being, was necessary to advance the means of production to a stage where communism (as he envisioned it) could be realized. Thus, while Marx was highly critical of capitalism, he also recognized its utility in developing the means of production.
Durkheim's view of society and the changes it was undergoing as a result of industrialization also led him to believe unhappiness was a possible outcome. Durkheim believed that an important component of social life was social solidarity, which is understood as a sense of community. In his classic study, Suicide, Durkheim argued that one of the root causes of suicide was a decrease in social solidarity – a condition Durkheim termed anomie (normlessness). Durkheim also argued that the increasing emphasis on individualism found in Protestant religions – in contrast to Catholicism – contributed to an increase in anomie, which resulted in higher suicide rates among Protestants.
In another work, The Division of Labor in Society, Durkheim proposed that pre-industrial societies maintained their social solidarity through a mechanistic sense of community and through their religious affiliations. Most people were generalists in their work – they farmed and created their own tools and clothing. Because they were alike in their generality, they were also more likely to share a sense of community, which Durkheim saw as an important component of happiness. In addition to their similarity in occupations, many individuals belonged to the same religious groups, which also fostered a sense of solidarity.
In industrializing societies, Durkheim recognized the inevitability of specialization. By definition, specialization means that individuals are going to have dissimilar occupations. This specialization would also affect religion. In industrial societies, religion would become just one aspect of lives that were increasingly divided into compartments – home, family, work, recreation, religion, etc.
Durkheim believed there were two components that would alleviate the decreasing social solidarity in industrializing societies: organic solidarity and conscientious attempts to find camaraderie through one's place of employment. Whereas social solidarity was maintained in pre-industrial societies through a mechanistic sense of similarity and dependence along with communal religious affiliations, in industrialized societies, social solidarity would be maintained by the interdependence of specialists on one another. If one individual specialized in treating the injured or ill, they would not have time to raise crops or otherwise produce food. Doctors would become dependent on farmers for their food while farmers would become dependent on doctors for their healthcare. This would force a type of organic solidarity — organic in the sense that the parts were interdependent like the organs of an animal are interdependent for their survival.
In addition to the inevitable interdependence a specialized society would warrant, Durkheim believed that conscientious efforts to develop and foster friendships would shift from religious brotherhoods to one's place of employment. Specialized individuals would have a great deal in common with their co-workers and, like members of the same religious congregations in pre-industrial societies, co-workers would be able to develop strong bonds of social solidarity through their occupations. Thus, for Durkheim, the answer to the decrease in mechanistic solidarity and the increasing anomie was organic solidarity and solidarity pursued within one's specialty occupation.
The simplest way to think about culture is to think about the distinction between nature (our biology and genetics) and nurture (our environment and surroundings that also shape our identities). Because of our biology and genetics, we have a particular form and we have certain abilities. But our biological nature does not exclusively determine who we are. For that, we need culture. Culture refers to the non-biological or social aspects of human life; basically, anything that humans learn is part of culture.
The two avatars to the right help illustrate this idea. The avatar wearing nothing but shorts comes close to representing nothing but nature. The form of the avatar reveals several things about this person, but they are not necessarily tied to a specific culture. For instance, the fact that he has lighter colored skin suggests he has Caucasian ancestry, but that is biological, not social. Otherwise, there is very little about this avatar that reflects culture (the exceptions are his shorts and hair styling, which do, in fact, reflect culture). The avatar wearing the colorful vest and pants stands in stark contrast to the other avatar. This second avatar is reflective of a particular culture. The colors of the vest - red, white, and blue - in the specific pattern they form (stars and stripes) suggest this avatar is in some way associated with the United States of America. The cut of the avatar's top and pants suggests a particular time period - the late 1960s or early 1970s. The backdrop, with the words Rock the Vote scrawled across it, also suggests something about the avatar - perhaps that he is a musician participating in the Rock the Vote movement. Additionally, the avatar's hairstyle, dreadlocks, also suggests something about this second avatar, as dreadlocks are prominent only in certain sub-cultures. In short, the first avatar is mostly a-cultural or lacking in culture while the second avatar is heavily enmeshed in culture.
Generally speaking, the following elements of social life are considered to be representative of human culture: "stories, beliefs, media, ideas, works of art, religious practices, fashions, rituals, specialized knowledge, and common sense" (p. xvi).
Yet, examples of culture do not, in themselves, present a clear understanding of the concept of culture; culture is more than the object or behavior. Culture also includes the meanings attached to those objects and behaviors.
To summarize, culture encompasses objects and symbols, the meaning given to those objects and symbols, and the norms, values, and beliefs that pervade social life. "The definition is understood to include two elements - that which differentiates one group or society from others and the concept of acquired or learned behavior" (p. 43).
The Changing Concept of Culture
Level of Abstraction
The Artificiality of Cultural Categorization
A subculture is a culture shared and actively participated in by a minority of people within a broader culture. A culture often contains numerous subcultures. Subcultures incorporate large parts of the broader cultures of which they are part, but in specifics they may differ radically. Some subcultures achieve such a status that they acquire a name of their own. Examples of subcultures could include: bikers, military culture, and Star Trek fans (trekkers or trekkies).
A counterculture is a subculture with the addition that some of its beliefs, values, or norms challenge or even contradict those of the main culture of which it is part. Examples of countercultures in the U.S. could include: the hippie movement of the 1960s, the green movement, polygamists, and feminist groups.
Subcultures bring together like-minded individuals who feel neglected by societal standards and allow them to develop a sense of identity. Subcultures can be distinctive because of the age, ethnicity, class, location, and/or gender of the members. The qualities that determine a subculture as distinct may be linguistic, aesthetic, religious, political, sexual, geographical, or a combination of factors. Members of a subculture often signal their membership through a distinctive and symbolic use of style, which includes fashions, mannerisms, and argot.
Ethnocentrism is the tendency to look at the world primarily from the perspective of one's own culture. Many claim that ethnocentrism occurs in every society; ironically, ethnocentrism may be something that all cultures have in common.
The term was coined by William Graham Sumner, a social evolutionist and professor of Political and Social Science at Yale University. He defined it as, "The sentiment of cohesion, internal comradeship, and devotion to the in-group, which carries with it a sense of superiority to any out-group and readiness to defend the interests of the in-group against the out-group." Ethnocentrism often entails the belief that one's own race or ethnic group is the most important and/or that some or all aspects of its culture are superior to those of other groups. Within this ideology, individuals will judge other groups in relation to their own particular ethnic group or culture, especially with concern to language, behaviour, customs, and religion. It also involves an incapacity to acknowledge that cultural differentiation does not imply inferiority of those groups who are ethnically distinct from one's own.
Sociologists study ethnocentrism because of its role in various elements of social life, ranging from politics to terrorism. This is also an area where sociologists often become advocates as they attempt to reveal ethnocentric biases to those who hold them with the aim of helping people realize that such biases are seldom beneficial to social solidarity and peaceful human relations.
Cultural relativism is the belief that the concepts and values of a culture cannot be fully translated into, or fully understood in, other languages; that a specific cultural artifact (e.g. a ritual) has to be understood in terms of the larger symbolic system of which it is a part.
An example of cultural relativism might include slang words from specific languages (and even from particular dialects within a language). For instance, the word tranquilo in Spanish translates directly to 'calm' in English. However, it can be used in many more ways than just as an adjective (e.g., the seas are calm). Tranquilo can be a command or suggestion encouraging another to calm down. It can also be used to ease tensions in an argument (e.g., everyone relax) or to indicate a degree of self-composure (e.g., I'm calm). There is not a clear English translation of the word, and in order to fully comprehend its many possible uses a cultural relativist would argue that it would be necessary to fully immerse oneself in cultures where the word is used.
While there are numerous theoretical approaches employed to understand 'culture', this chapter uses just one model to illustrate how sociologists understand the concept.
The Function of Culture
Culture can also be seen to play a specific function in social life. According to Griswold, "The sociological analysis of culture begins at the premise that culture provides orientation, wards off chaos, and directs behavior toward certain lines of action and away from others." Griswold reiterates this point by explaining that, "Groups and societies need collective representations of themselves to inspire sentiments of unity and mutual support, and culture fulfills this need." In other words, culture can have a certain utilitarian function – the maintenance of order as the result of shared understandings and meanings.
The belief that culture is symbolically coded and can thus be taught from one person to another means that cultures, although bounded, can change. Cultures are both predisposed to change and resistant to it. Resistance can come from habit, religion, and the integration and interdependence of cultural traits. For example, men and women have complementary roles in many cultures. One sex might desire changes that affect the other, as happened in the second half of the 20th century in western cultures (see, for example, the women's movement), while the other sex may be resistant to that change (possibly in order to maintain a power imbalance in their favor).
Cultural change can have many causes, including the environment, inventions, and contact with other cultures. For example, the end of the last ice age helped lead to the invention of agriculture. An invention that substantially changed culture was the development of the birth control pill, which changed women's attitudes toward sex. Prior to the introduction of the birth control pill, women were at a high risk of pregnancy as a result of sex. After the introduction of the pill, their risk of pregnancy was substantially reduced, increasing their willingness to engage in sexual activity outside of wedlock. Likewise, the introduction of the television has been linked to a substantial decline in American involvement in civic life.
Several understandings of how cultures change come from anthropology. For instance, in diffusion theory, the form of something moves from one culture to another, but not its meaning. For example, the ankh symbol originated in Egyptian culture but has diffused to numerous cultures. Its original meaning may have been lost, but it is now used by many practitioners of New Age religion as an arcane symbol of power or life forces.
Contact between cultures can also result in acculturation. Acculturation has different meanings, but in this context refers to the replacement of the traits of one culture with those of another, as happened with many Native Americans as Europeans took over their lands. Many Native Americans were acculturated into European cultural norms, from religion to how to raise children. Related processes on an individual level are assimilation and transculturation, both of which refer to the adoption of a different culture by an individual.
Griswold outlined another sociological approach to cultural change. Griswold points out that it may seem as though culture comes from individuals, but there is also the larger, collective, and long-lasting culture that cannot have been the creation of single individuals, as it predates and post-dates individual humans and contributors to culture. The author presents a sociological perspective to address this conflict.
Griswold suggests, then, that culture changes through the contextually dependent and socially situated actions of individuals; macro-level culture influences the individual who, in turn, can influence that same culture. The logic is a bit circular, but it illustrates how culture can change over time yet remain somewhat constant.
It is, of course, important to recognize here that Griswold is talking about cultural change and not the actual origins of culture (as in, "there was no culture and then, suddenly, there was"). Because Griswold does not explicitly distinguish between the origins of cultural change and the origins of culture, it may appear as though Griswold is arguing here for the origins of culture and situating these origins in society. This is neither accurate nor a clear representation of sociological thought on this issue. Culture, just like society, has existed since the beginning of humanity (humans being social and cultural beings). Society and culture co-exist because humans have social relations and meanings tied to those relations (e.g. brother, lover, friend). Culture as a super-phenomenon has no real beginning except in the sense that humans (homo sapiens) have a beginning. This, then, makes the question of the origins of culture moot – it has existed as long as we have, and will likely exist as long as we do.
How do sociologists study culture? One approach to studying culture falls under the label 'cultural sociology', which combines the study of culture with cultural understandings of phenomena. Griswold explains how cultural sociologists approach their research.
Cultural sociologists look for how people make meaning in their lives out of the different cultural elements that surround them. A particularly clear example of cultural sociology is the study of the Village-Northton by Elijah Anderson. Anderson is interested in a number of things in his book, but two cultural components stand out. First, Anderson is looking at the border of two culturally and socio-economically distinct neighborhoods. Because these two neighborhoods are distinct yet share a border, this research site provides numerous opportunities for the exploration of culture. Not surprisingly, cultural conflict is an optimal scenario for the exploration of culture and cultural interaction. Additionally, Anderson is interested in how individuals in these neighborhoods negotiate interpersonal interactions, especially when individuals from the Village (middle to upper-middle class and predominantly white) are forced to interact with members of the Northton area (lower class and poor blacks).
Anderson's methodology is a combination of participant observation and interviews. But when viewed in light of Griswold's description of cultural sociology, it becomes apparent that Anderson's focus in these interviews and observations is self-presentation. Anderson regularly describes the individuals he interviews and observes in light of their clothing, behavior, attitudes, beliefs, and opinions. As he interacts with more and more individuals, patterns begin to develop. Specifically, individuals dressed in certain outfits behave in similar ways. For instance, those dressed in business attire (even when walking their dogs) – the yuppies – have particular perspectives on the future of the Village: they are interested in increasing property values in order to maximize their investment. Another example of the cultural significance of clothing is older black men who intentionally wear button-up shirts and ties because of the cultural symbolism of that particular outfit: it signifies to the cultural outsider that the wearer is refined and distinct from the athletic-suit-wearing drug dealers who control numerous Northton corners.
Ultimately, Anderson's goal is to develop a sort of typology of streetwise individuals: people who can manage awkward and uncomfortable interpersonal interactions on the street in such a fashion that they emerge from the interactions unharmed. While he does develop a loose description of these types of individuals, the important part to understand here is how he explores these aspects of culture. First, he found a cultural border that presented cultural conflict. When individuals have to negotiate meaning publicly, it is much easier for the sociologist to tease out culture. Additionally, Anderson observed both the transmission of culture from generation to generation (i.e., socialization) and the self-representation that is provided by cultural expressions (clothing, behavior, etc.). Through years of observation, Anderson gained a familiarity with these elements of culture that allowed him to understand how they interacted.
Many people today think of culture in the way that it was thought of in Europe during the 18th and early 19th centuries. This concept of culture reflected inequalities within European societies and their colonies around the world. This understanding of culture equates culture with civilization and contrasts both with nature or non-civilization. According to this understanding of culture, some countries are more civilized than others, and some people are more cultured than others. Theorists like Matthew Arnold (1822-1888) believed that culture is simply that which is created by "the best that has been thought and said in the world" (p. 6). Anything that doesn't fit into this category is labeled as chaos or anarchy. From this perspective, culture is closely tied to cultivation, which is the progressive refinement of human behavior.
In practice, culture referred to elite goods and activities such as haute cuisine, high fashion or haute couture, museum-caliber art and classical music. The word cultured referred to people who knew about and took part in these activities. For example, someone who used culture in this sense might argue that classical music is more refined than music by working-class people, such as jazz or the indigenous music traditions of aboriginal peoples.
People who use culture in this way tend not to use it in the plural. They believe that there are not distinct cultures, each with their own internal logic and values, but rather only a single standard of refinement to which all groups are held accountable. Thus people who differ from those who believe themselves to be cultured in this sense are not usually understood as having a different culture; they are understood as being uncultured.
During the Romantic Era, scholars in Germany, especially those concerned with nationalism, developed a more inclusive notion of culture as worldview. That is, each ethnic group is characterized by a distinct and incommensurable world view. Although more inclusive, this approach to culture still allowed for distinctions between civilized and primitive or tribal cultures.
By the late 19th century, anthropologists had changed the concept of culture to include a wider variety of societies, ultimately resulting in the concept of culture outlined above - objects and symbols, the meaning given to those objects and symbols, and the norms, values, and beliefs that pervade social life.
This new perspective has also removed the evaluative element of the concept of culture and instead proposes distinctions rather than rankings between different cultures. For instance, the high culture of elites is now contrasted with popular or pop culture. In this sense, high culture no longer refers to the idea of being cultured, as all people are cultured. High culture simply refers to the objects, symbols, norms, values, and beliefs of a particular group of people; popular culture does the same.
Most social scientists today reject the cultured vs. uncultured concept of culture. Instead, social scientists accept and advocate the definition of culture outlined above as being the "nurture" component of human social life. Social scientists recognize that non-elites are as cultured as elites (and that non-Westerners are just as civilized); they simply have a different culture.
The Origins of Culture
Attentive to the theory of evolution, anthropologists assumed that all human beings are equally evolved, and the fact that all humans have cultures must in some way be a result of human evolution. They were also wary of using biological evolution to explain differences between specific cultures - an approach that either was a form of, or legitimized forms of, racism. Anthropologists instead adopted an inclusive notion of culture as a product of biological evolution, a concept they could apply equally to non-literate and literate societies, or to nomadic and sedentary societies. They argued that through the course of their evolution, human beings evolved a universal human capacity to classify experiences and to encode and communicate them symbolically. Since these symbolic systems were learned and taught, they began to develop independently of biological evolution (in other words, one human being can learn a belief, value, or way of doing something from another, even if they are not biologically related). That this capacity for symbolic thinking and social learning is a product of human evolution confounds older arguments about nature versus nurture. Thus, Clifford Geertz argued that human physiology and neurology developed in conjunction with the first cultural activities, and Middleton (1990:17 n.27) concluded that human "instincts were culturally formed."
This view of culture argues that people living apart from one another develop unique cultures. However, elements of different cultures can easily spread from one group of people to another. Culture is dynamic and can be taught and learned, making it a potentially rapid form of adaptation to changes in physical conditions. Anthropologists view culture as not only a product of biological evolution but as a supplement to it; it can be seen as the main means of human adaptation to the natural world.
This view of culture as a symbolic system with adaptive functions, which varies from place to place, led anthropologists to conceive of different cultures as defined by distinct patterns (or structures) of enduring, although arbitrary, conventional sets of meaning, which took concrete form in a variety of artifacts such as myths and rituals, tools, the design of housing, and the planning of villages. Anthropologists thus distinguish between material culture and symbolic culture, not only because each reflects different kinds of human activity, but also because they constitute different kinds of data that require different methodologies to study.
This view of culture, which came to dominate anthropology between World War I and World War II, implied that each culture was bounded and had to be understood as a whole, on its own terms. The result is a belief in cultural relativism, which suggests that there are no "better" or "worse" cultures, just different cultures.
Recent research suggests that human culture can reverse the causal direction suggested above and influence human evolution. One well-known illustration of this is lactase persistence: the rapid spread of a genetic variant that keeps active the gene producing the enzyme that allows humans to digest lactose. This adaptation spread rapidly in Europe around 4,000 BCE with the domestication of mammals, as humans began harvesting their milk for consumption. Prior to this adaptation, the gene producing the lactose-digesting enzyme was switched off after children were weaned. Thus, the change in culture - drinking milk from other mammals - eventually led to changes in human genetics. Genetics, therefore, resulted in culture, which is now acting back on genetics.
Another element of culture that is important for a clear understanding of the concept is level of abstraction. Culture ranges from the concrete, cultural object (e.g., the understanding of a work of art) to micro-level interpersonal interactions (e.g., the socialization of a child by his/her parents) to a macro-level influence on entire societies (e.g., the Puritanical roots of the U.S. that can be used to justify the exportation of democracy – à la the Iraq War). It is important when trying to understand the concept of culture to keep in mind that the concept can have multiple levels of meaning.
One of the more important points to understand about culture is that it is an artificial categorization of elements of social life. As Griswold puts it,
In the above quote, Griswold emphasizes that culture is distinct from society but affirms that this distinction, like all classifications, is artificial. Humans do not experience culture in a way that is separate or distinct from society. Culture and society are truly two sides of the same coin; together they make up social life. Yet the distinction between the two, while artificial, is useful for a number of reasons. For instance, it is of particular use when exploring how norms and values are transmitted from generation to generation and when examining cultural conflict between people of different cultural backgrounds (say, the Japanese and Americans).
Socialization is a fundamental sociological concept, comprising a number of elements. While not every sociologist will agree which elements are the most important, or even how to define some of the elements of socialization, the elements outlined below should help clarify what is meant by socialization.
Goals of Socialization
Primary and Secondary Socialization
Broad and Narrow Socialization
The Importance of Socialization
Socialization, as a concept in social scientific research, has evolved over time. While the basic idea outlined above has been a component of most understandings of socialization, there have been quite a variety of definitions and theories of socialization. Some of these approaches are presented here as definitional variety is often informative.
Recent research suggests that human children are hard-wired to exactly imitate the roles of adults, including actions that are not pragmatic. This is referred to as "overimitation" and, while seemingly maladaptive from an evolutionary perspective, it is possible that this is one of the characteristics of humans that facilitates the transmission of culture from generation to generation.
Socialization as Joining Groups
Numerous examples of research on socialization could be presented in this section. One important area of socialization research involves differences in gender socialization, but much of that research is summarized in the chapter on gender. The following three research examples are interesting in that they explore both primary and secondary socialization and do so from varying perspectives.
Socialization and Social Class
Socialization and Death Preparation
Do College Preparation Classes Make a Difference?
Arnett, in presenting a new theoretical understanding of socialization (see below), outlined what he believes to be the three goals of socialization:
In short, socialization is the process that prepares humans to function in social life. It should be reiterated here that socialization is culturally relative - people in different cultures are socialized differently. This distinction does not and should not imply an evaluative judgement. Socialization, because it is the adoption of culture, is going to be different in every culture. Socialization, as both a process and an outcome, is not better or worse in any particular culture.
It should also be noted that, while socialization is a key sociological process in the development of individuals who can function in human society, not every aspect of human behavior is learned. For instance, there is evidence that most children have innate empathy for individuals who are wilfully injured and consider such injury wrong. Thus, some aspects of human behavior that one might believe are learned, like empathy and morals, may in fact be biologically determined. To what extent human behavior is biologically determined versus learned remains an open question in the study of human behavior.
Socialization is a lifelong process, but it is generally divided into two parts. Primary socialization takes place early in life, during childhood and adolescence. Secondary socialization refers to the socialization that takes place throughout one's life, both as a child and later as one encounters new groups that require additional socialization. While there are scholars who argue that only one or the other of these occurs, most social scientists tend to combine the two, arguing that the basic or core identity of the individual develops during primary socialization, with more specific changes occurring later - secondary socialization - in response to the acquisition of new group memberships and roles and differently structured social situations. The need for later-life socialization may stem from the increasing complexity of society, with its corresponding increase in varied roles and responsibilities.
Mortimer and Simmons outline three specific ways these two parts of socialization differ:
Socialization is, of course, a social process. As such, it involves interactions between people. As the distinction between primary and secondary socialization suggests, socialization can take place in multiple contexts and as a result of contact with numerous groups. Some of the more significant contributors to the socialization process are parents, friends, schools, siblings, and co-workers. Each of these groups includes a culture that must be learned and, to some degree, appropriated by the socializee in order to gain admittance to the group.
Arnett proposed an interesting though seldom used distinction in types of socialization. Arnett distinguishes between broad and narrow socialization:
These distinctions correspond to Arnett's definition of socialization, which is:
Arnett explains that his understanding of socialization should not be understood as having just two options, broad or narrow. Instead, the author argues that socialization can be broad or narrow within each of the seven socializing forces he outlines (e.g., family, friends, etc.). Because each force can be either broad or narrow, there is a wide variety of possible broad/narrow socialization combinations. Finally, Arnett notes two examples where his distinction is relevant. First, Arnett argues that there are often differences in socialization by gender. Where these differences exist, argues Arnett, socialization tends to be narrower for women than for men. Arnett also argues that Japanese socialization is narrow as there is more pressure toward conformity in that culture. Arnett argues that this may account for the lower crime rates in Japan.
Not all socialization is voluntary nor is all socialization successful. There are components of society designed specifically to resocialize individuals who were not successfully socialized to begin with. For instance, prisons and mental health institutions are designed to resocialize people who are deemed to have not been successfully socialized. Depending on the degree of isolation and resocialization that takes place in a given institution, some of these institutions are labeled total institutions. In his classic study of total institutions, Erving Goffman gives the following characteristics of total institutions:
The most common examples of total institutions include mental hospitals, prisons, and military boot camps, though there are numerous other institutions that could be considered total institutions as well. The goal of total institutions is to facilitate a complete break with one's old life in order for the institution to resocialize the individual into a new life.
Mortimer and Simmons note a difference in socialization methodologies in different types of institutions. When the goal of an institution is socialization (primary or secondary), the institution tends to use normative pressures. When the goal of an institution is resocialization of deviants, coercion is frequently involved.
One of the most common methods used to illustrate the importance of socialization is to draw upon the few unfortunate cases of children who were, through neglect, misfortune, or wilful abuse, not socialized by adults while they were growing up. Such children are called "feral" or wild. Some feral children have been confined by people (usually their own parents); in some cases this child abandonment was due to the parents' rejection of a child's severe intellectual or physical impairment. Feral children may have experienced severe child abuse or trauma before being abandoned or running away. Others are alleged to have been brought up by animals; some are said to have lived in the wild on their own. When completely brought up by non-human animals, the feral child exhibits behaviors (within physical limits) almost entirely like those of the particular care-animal, such as its fear of or indifference to humans.
Feral children lack the basic social skills which are normally learned in the process of socialization. For example, they may be unable to learn to use a toilet, have trouble learning to walk upright and display a complete lack of interest in the human activity around them. They often seem mentally impaired and have almost insurmountable trouble learning a human language. The impaired ability to learn language after having been isolated for so many years is often attributed to the existence of a critical period for language learning, and taken as evidence in favor of the Critical Period Hypothesis. It is very difficult to socialize a child who became isolated at a very young age into a relatively normal member of society and such individuals often need close care throughout their lives.
There are, unfortunately, a number of examples of such children that have been well-documented, including:
Socialization has addressed the problem of individual adjustment to society. In all of the approaches outlined above, socialization has, in one way or another, referred to the idea that society shapes its members toward compliance and cooperation with societal requirements. In order to reduce confusion, develop a research methodology for measuring socialization, and potentially lead to the comparability of research findings from different studies, Long and Hadden proposed a revised understanding of socialization. Rather than referring to a vague adoption or learning of culture, Long and Hadden reframed socialization as "the medium for transforming newcomers into bona fide members of a group." Before discussing some of the specifics of this approach, it may be useful to outline some of the critiques Long and Hadden present of earlier approaches to socialization.
According to Long and Hadden, many earlier approaches to socialization extended socialization to every part of human social life. As a result, everyone becomes both a socializing agent (socializer) and a novice (socializee) in all encounters with others. This conceptualization leaves socialization without a social home; it is all around but no place in particular. Another criticism of previous approaches is that they allowed socialization to include anything, and anything which is part of the process at one time may be excluded at another. With this conceptualization, any phenomenon may shift its status in the socialization process without changing its own composition or expression. In other words, socialization includes virtually everything, excludes almost nothing, and shifts with circumstance and outcomes. Additionally, previous approaches to socialization lacked specificity about the nature of socialization activity. Defining socialization by its outcomes made it unnecessary to stipulate the nature of the process conceptually. Socialization could be attributed to this or that but in order to truly understand what is taking place it is necessary to go beyond just pointing to socializing agents and specify what it is about those agents that is doing the socializing. Another serious drawback of earlier approaches is that they disregard the process component of socialization. Doing so limits the socialization concept to employment primarily as a post hoc interpretive category that is used to lend significance to findings defined and developed in other terms.
As a result of these criticisms, Long and Hadden found themselves presented with a two-fold task:
To accomplish this, Long and Hadden developed a new understanding of socialization, "socialization is the process of creating and incorporating new members of a group from a pool of newcomers, carried out by members and their allies". Under this understanding, the principal agents of socialization are certified and practicing members of the group to which novices are being socialized. It should be noted that certified here is only a shortened way of saying "a socially approved member of the group." Thus, Long and Hadden's revised understanding of socialization sees it as both the process and outcome of joining groups.
Ellis, Lee, and Peterson, developing a research agenda begun by Melvin L. Kohn, explored differences in how parents raise their children relative to their social class. Kohn found that lower-class parents were more likely to emphasize conformity in their children, whereas middle-class parents were more likely to emphasize creativity and self-reliance. Ellis et al. proposed and found that parents value conformity over self-reliance in their children to the extent that conformity supersedes self-reliance as a criterion for success in the parents' own endeavors. In other words, Ellis et al. found that lower-class parents emphasize conformity in their children because they experience the importance of conformity in their own day-to-day activities; factory work, for example, rewards conformity far more than innovation.
Another study in this same area explored a slightly different component of this relationship. Erlanger was interested in the correlation between social class and physical violence. While he did not find a strong correlation indicating that lower-class individuals were more likely to employ physical violence in punishing their children, he did present evidence concerning several earlier propositions. Erlanger's findings include:
It should be noted that this is an older study and that more recent findings may have shed more light on these issues. It should also be noted that Erlanger readily points out when his findings are strongly supported or weakly supported by his data. It behooves the interested party to read his paper directly rather than rely on the summary above for the specific nuances.
Marshall interviewed a number of retirement home residents to explore how their environment influenced their thinking about death. In essence, Marshall was examining secondary socialization concerning mortality. Marshall found that a combination of relationships, behavioral changes, and retirement home culture contributed to a conception of death that was both accepting and courageous.
Residents of this particular retirement home found themselves with more time on their hands - time to think about death - because they no longer had to care for their own homes. Additionally, they found themselves surrounded by people in a situation similar to their own: they were basically moving into the retirement home to prepare for death. The prevalence of elderly people facilitated discussions of death, which also helped socialize the residents into their acceptance of mortality. Finally, the retirement home community encouraged a culture of life and fulfilment in part to counteract the frequency of death. Some residents calculated there was one death per week in the retirement home. In light of such sobering numbers, it was important to the success of the community to maintain a positive culture that embraced life yet accepted death. In summary, Marshall found that numerous factors contributed to the socialization of residents into a positive lifestyle that was also accepting of and preparatory for their impending deaths.
Rosenbaum was interested in the effects of high school tracks on IQ. High school tracks are the different levels or types of courses students can take; for instance, many high schools now include college preparation tracks and general education tracks. Rosenbaum's hypothesis was that students who followed the lower tracks (non college-preparation) would score lower on IQ tests over time than would students who followed the higher tracks (college-preparation). Considering that school is one of the primary contributors to socialization, it makes sense that participation in a given track can also result in the adoption of the norms, values, beliefs, skills, and behaviors that correspond to that track. In other words, tracks can turn into a type of self-fulfilling prophecy: you may start out at the same level as someone in a higher track, but by the time you have completed the lower track you will have become like the other students in your track.
To reduce the influence of confounding variables, Rosenbaum selected a homogeneous, white, working-class public school with five distinct, highly stratified tracks. He then compared IQ scores for individuals in the different tracks at two points in time. As it turns out, tracking does have a significant effect on IQ: people in lower tracks actually saw a decline in IQ compared to a possible increase among those in the upper tracks. In other words, tracks socialize their students into their corresponding roles.
Social identity is a theory developed by Henri Tajfel and John Turner to understand the psychological basis of intergroup discrimination. As developed by Tajfel, Social Identity Theory is a diffuse but interrelated group of social psychological theories concerned with when and why individuals identify with, and behave as part of, social groups, adopting shared attitudes to outsiders. It is also concerned with what difference it makes when encounters between individuals are perceived as encounters between group members. Social Identity Theory is thus concerned both with the psychological and sociological aspects of group behavior. According to Tajfel and Turner, social identities are composed of three elements.
In sociology we distinguish between two types of groups based upon their characteristics. A Primary group is typically a small social group whose members share close, personal, enduring relationships. These groups are marked by concern for one another, shared activities and culture, and long periods of time spent together. The goal of primary groups is actually the relationships themselves rather than achieving some other purpose. Families and close friends are examples of primary groups.
Secondary groups are large groups whose relationships are impersonal and goal-oriented. Some secondary groups may last for many years, though most are short term. Such groups also begin and end with very little significance in the lives of the people involved. People in a secondary group interact on a less personal level than in a primary group. Rather than having as the goal the maintenance and development of the relationships themselves, these groups generally come together to accomplish a specific purpose. Since secondary groups are established to perform functions, people’s roles are more interchangeable. Examples of secondary groups include: classmates in a college course, athletic teams, and co-workers.
The distinction between primary and secondary groups was originally proposed by Charles Horton Cooley. He labeled groups as "primary" because people often experience such groups early in their life and such groups play an important role in the development of personal identity. Secondary groups generally develop later in life and are much less likely to be influential on one's identity.
The primary reason sociologists study groups is because of their power over individuals. A large body of research suggests that group membership strongly shapes the everyday behavior of individuals. Following are some of the many ways groups shape individual level behavior.
Diffusion of Responsibility
False Consensus and Illusory Superiority
A social network is a social structure between actors, either individuals or organizations. It indicates the ways in which they are connected through various social familiarities ranging from casual acquaintance to close familial bonds. The study of social networks is called both social network analysis and social network theory. Research in a number of academic fields has demonstrated that social networks operate on many levels, from families up to the level of nations, and play a critical role in determining the way problems are solved, organizations are run, and the degree to which individuals succeed in achieving their goals.
Social network theory views social relationships in terms of nodes and ties. Nodes are the individual actors within the networks, and ties are the relationships between the actors. There can be many kinds of ties between the nodes. In its most simple form, a social network is a map of all of the relevant ties between the nodes being studied. The network can also be used to determine the social capital of individual actors. These concepts are often displayed in a social network diagram, where nodes are the points and ties are the lines.
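The node-and-tie picture described above maps directly onto a simple graph data structure. As a minimal sketch (the actors and ties here are invented for illustration), a network can be stored as an adjacency list, and a basic measure of connectedness - the number of ties an actor has, often called degree centrality - falls out immediately:

```python
# A social network as an adjacency list: each node (actor) maps to the
# set of actors it shares a tie (relationship) with.
network = {
    "Ana": {"Ben", "Cal"},
    "Ben": {"Ana", "Cal", "Dee"},
    "Cal": {"Ana", "Ben"},
    "Dee": {"Ben"},
}

def degree_centrality(net, node):
    """Number of ties a node has -- a crude measure of connectedness."""
    return len(net[node])

# Ben has the most ties, so he is the most "central" actor in this network.
most_central = max(network, key=lambda n: degree_centrality(network, n))
print(most_central)  # Ben
```

In a social network diagram, this same structure would appear as four points with lines between the connected pairs; the dictionary is simply that diagram written as data.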
The shape of the social network helps determine a network's usefulness to its individuals. Smaller, tighter networks can be less useful to their members than networks with lots of loose connections (weak ties) to individuals outside the main network. More "open" networks, with many weak ties and social connections, are more likely to introduce new ideas and opportunities to their members than closed networks with many redundant ties. In other words, a group of friends who only do things with each other already share the same knowledge and opportunities. A group of individuals with connections to other social worlds is likely to have access to a wider range of information. It is better for individual success to have connections to a variety of networks rather than many connections within a single network. Similarly, individuals can exercise influence or act as brokers within their social networks by bridging two networks that are not directly linked (called filling structural holes).
The power of social network theory stems from its difference from traditional sociological studies, which assume that it is the attributes of individual actors - whether they are friendly or unfriendly, smart or dumb, etc. - that matter. Social network theory produces an alternate view, in which the attributes of individuals are less important than their relationships and ties with other actors within the network. This approach has turned out to be useful for explaining many real-world phenomena, but it leaves less room for individual agency - the ability of individuals to influence their own success - since so much of that success rests within the structure of their network. For instance, social networks have been used to examine how companies interact with each other, characterizing the many informal connections that link executives together, as well as associations and connections between individual employees at different companies. These networks provide ways for companies to gather information, deter competition, and even collude in setting prices or policies. Power within organizations has also been found to be tied to social networks. Powerful people in organizations often derive their power from their degree of connectedness within the organization (i.e., the degree to which an individual within a network is at the center of many relationships) rather than from job title or status. Social networks also play a key role in hiring, in business success for firms, and in job performance.
The so-called rule of 150 states that the size of a genuine social network is limited to about 150 members (sometimes called the Dunbar number). The rule arises from cross-cultural studies in sociology, and especially anthropology, of the maximum size of a village. Evolutionary psychologists theorize that the number may reflect a limit on the average human ability to recognize members of a group and track emotional facts about all of them. Alternatively, it may be due to economics and the need to track "free riders," as it is easier for cheats and liars to prosper in larger groups.
The small world phenomenon is the hypothesis that the chain of social acquaintances required to connect one arbitrary person to another arbitrary person anywhere in the world is generally short. The concept gave rise to the famous phrase six degrees of separation after a 1967 small world experiment by psychologist Stanley Milgram, which found that two random US citizens were connected by an average of six acquaintances. Current internet experiments continue to explore this phenomenon, including the Ohio State Electronic Small World Project and Columbia's Small World Project. As of 2005, these experiments suggest that about five to seven degrees of separation are sufficient for connecting any two people through the internet.
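The "degrees of separation" between two people is simply the length of the shortest chain of acquaintances linking them - a shortest path in the network. A minimal sketch (the acquaintance network here is hypothetical) using breadth-first search:

```python
from collections import deque

def degrees_of_separation(network, start, target):
    """Length of the shortest acquaintance chain from start to target (BFS)."""
    if start == target:
        return 0
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        person, dist = queue.popleft()
        for acquaintance in network[person]:
            if acquaintance == target:
                return dist + 1
            if acquaintance not in seen:
                seen.add(acquaintance)
                queue.append((acquaintance, dist + 1))
    return None  # no chain of acquaintances connects them

# Hypothetical acquaintance network:
network = {
    "Ana": {"Ben"}, "Ben": {"Ana", "Cal"},
    "Cal": {"Ben", "Dee"}, "Dee": {"Cal"},
}
print(degrees_of_separation(network, "Ana", "Dee"))  # 3
```

Milgram's finding amounts to the empirical claim that, in the real-world acquaintance network, this number averages around six for random pairs of people.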
Sociologists are interested in social networks because of their influence on and importance for the individual. Social networks are the basic tools used by individuals to meet other people, to recreate, and to find social support. Recent research suggests that the social networks of Americans are shrinking, and that more and more people have no close confidants or people with whom they can share their most intimate thoughts. In 1985, the mean network size of individuals in the U.S. was 2.94 people. Networks declined by almost an entire confidant by 2004, to 2.08 people. Almost half of Americans, 46.3%, say they have only one or no confidants with whom they can discuss important matters. The most frequently occurring response in 2004 to the question of how many confidants one has was zero. The decline in confidants has been most notable among non-kin networks, putting greater emphasis on kin and spouses as social confidants. Most social confidants are similar in demographic characteristics to the person doing the sharing. The implications of these findings are potentially disturbing for American society, as people have smaller social support networks, which are important for both social and health reasons.
We categorize objects in order to understand them; in much the same way, we categorize people (including ourselves) in order to understand the social environment. We use social categories like black, white, Australian, Christian, Muslim, student, and bus driver because they are useful. If we can assign people to a category, that tells us things about those people. Without an understanding of people's groups and categories, we would have a very difficult time functioning in society. Similarly, we find out things about ourselves by knowing what categories we belong to. We define appropriate behavior by reference to the norms of the groups we belong to, but we can only do this if we can tell who belongs to our group.
We identify with groups that we perceive ourselves to belong to. Identification carries two meanings. The first is that part of who we are is made up of our group memberships. That is, sometimes we think of ourselves as "us" vs. "them" or "we" vs. "they," and at other times we think of ourselves as "I" vs. "he or she" or "me" vs. "him or her." In other words, sometimes we think of ourselves as group members and at other times as unique individuals. This varies situationally, so that we can be more or less of a group member depending upon the circumstances. What is crucial for our purposes is that thinking of yourself as a group member and thinking of yourself as a unique individual are both parts of your self-concept. The first is referred to as social identity, the latter as personal identity. In social identity theory, group membership is not something foreign that is tacked onto the person; it is a real, true, and vital part of the person. Our groups make up part of who we are.
The other meaning implied by the concept of identity is the idea that we are, in some sense, the same as, or identical to, other people. This should not be misinterpreted: when we say that we are the same, we mean that for some purposes we treat members of our groups as being similar to ourselves in some relevant way. To take the most extreme example, in a violent conflict such as a war, the members of the opposite group - the outgroup - are treated as identical to one another and as completely different from one's own group - the ingroup - which is seen as made up of distinct individuals. Thinking about the individuals in one's outgroup in such a fashion dehumanizes them and allows one to believe that the enemy is deserving of death (more on this below). Treating people this way allows us to justify otherwise unjustifiable behavior.
A positive self-concept is a part of normal psychological functioning. There is fairly good evidence that, to deal effectively with the world, we need to feel good about ourselves. The idea of social comparison is that in order to evaluate ourselves we compare ourselves with similar others. We often gain self-esteem by comparing ourselves with others in our group, particularly if we can claim membership in a prestigious group. The prestige of a group is also often created through comparisons that positively reflect on the group. In other words, people in groups choose to compare their groups with other groups in ways that reflect positively on themselves. In fact, people are motivated to see their own group as relatively better than similar (but slightly inferior) groups (i.e., positive distinctiveness). Conversely, people in a group may minimize differences between their group and a slightly more prestigious group so that their own group is seen more favorably (i.e., negative distinctiveness).
Groups choose dimensions for comparison in order to maximize the positivity of their own group. Groups which perceive themselves to be of high status on particular dimensions will choose those as the basis of comparison. Groups of low status will minimize differences on those dimensions or choose new dimensions. For example, people from some Middle Eastern Islamic countries might regard their country as inferior to the West in terms of economic and technological advancement but might regard their way of life as being morally superior.
Intriguingly, the notion that inferior or "underdog" groups are hyper-motivated to succeed against superior groups turns out not to be true, generally. Members of a group or team will actually work harder when they are competing against a lower ranked group than when they are competing against a higher-ranked group. This makes sense when framed as a status issue: the superior group has more to lose if it is defeated by an inferior team while the inferior team, if it loses, has not lost anything but rather has affirmed the existing social order. Thus, members of higher status groups work harder when competing against lower status groups.
If you've ever done something in a group that you would not do if you were alone, it's likely that you experienced conformity; your attitudes, beliefs, and/or behaviors were influenced by other people. One of the most well-known illustrations of conformity is Solomon Asch's conformity experiments. The experiment was set up as follows:
Research participants would enter a room and sit at a table with several other people (who were confederates, meaning they were actually helping with the research). The participant and confederates would be shown one card that had a reference line and another card that had three comparison lines. Subjects were required to select the comparison line that corresponded in length to the reference line.
This seems like a relatively straightforward task. However, choosing the correct line becomes much more difficult when the confederates choose the wrong answer. Why? Because of our tendency to conform to those around us. Thus, when Asch had the confederates choose the wrong answer, participants also chose the wrong line 37% of the time. How do you think you would respond in such a situation?
Jason Chein and Laurence Steinberg have illustrated why people in groups are more likely to take risks (one form of conforming) using functional magnetic resonance imaging: the presence of peers heightens the benefits that result from risky decisions. Participants in their research remained aware of the risks, but their brains exhibited heightened sensitivity to the benefits of succeeding. Thus, the presence of peers can facilitate risky behavior as it makes the reward more meaningful.
Social facilitation is the tendency for people to be aroused into better performance on simple tasks (or tasks at which they are expert or that have become autonomous) when under the eye of others, rather than while they are alone (i.e., the audience effect), or when competing against another (i.e., the coactor effect). Complex tasks (or tasks at which people are not skilled), however, are often performed in an inferior manner in such situations. This effect has been demonstrated in a variety of species. In humans, it is strongest among those who are most concerned about the opinions of others, and when the individual is being watched by someone he or she does not know, or cannot see well.
Social loafing refers to the phenomenon that can occur when people in a group make less of an effort to achieve a goal than they would working alone. As a result of social loafing, groups can sometimes generate less total output than the combined performance of their members working as individuals. Social loafing results from a lack of motivation when working in a group because individuals do not believe their specific contribution will be evaluated. As a result, they do not put in as much effort as they otherwise would. Social loafing is pervasive, but it does not occur when group members feel that their task or the group itself is important.
Deindividuation refers to the phenomenon of relinquishing one's sense of identity, self-awareness, or evaluation apprehension. This can happen as a result of becoming part of a group that fosters obedience to group norms rather than an individual's norms, such as an army or mob. Once this happens, individuals no longer think about themselves before they act and may, in fact, be unaware of their own actions.
Deindividuation can have quite destructive effects, like increasing the odds that someone will commit a crime, engage in violence, or even over-enforce the law, as police sometimes do in riot situations.
Group polarization refers to the finding that after participating in a discussion group, members tend to advocate more extreme positions and/or call for riskier courses of action than individuals who did not participate in any such discussion.
Group polarization results from two primary mechanisms: social comparison and informational influence. Social comparison refers to the drive of individuals to appear socially desirable. Informational social influence occurs when a person is in a situation where he or she is unsure of the correct way to behave. In such situations, that person will often look to others for cues concerning the correct behavior. When "we conform because we believe that others' interpretation of an ambiguous situation is more accurate than ours and will help us choose an appropriate course of action," it is informational social influence.
Group polarization has been used to explain the decision-making of juries, particularly when considering punitive damages in a civil trial. Studies have shown that after deliberating together, mock jury members often decided on punitive damage awards that were larger or smaller than the amount any individual juror had favored prior to deliberation. The studies indicated that when the jurors favored a relatively low award, discussion would lead to an even more lenient result, while if the jury was inclined to impose a stiff penalty, discussion would make it even harsher.
Diffusion of responsibility (also called the bystander effect) is a social phenomenon that tends to occur in groups of people above a certain critical size when responsibility is not explicitly assigned. A common example would be observing a minor fender-bender on a busy freeway. Most people who observe something like that do not stop and do not call the police, assuming someone else will do so. This phenomenon rarely occurs in small groups: in tests involving groups of three or fewer, everyone in the group took action, whereas in groups of over ten, no one took action in almost every test.
The false consensus effect is the tendency for people to project their way of thinking onto other people. In other words, people often assume that everyone else thinks the same way they do. This belief is unsubstantiated by statistical data, leading to the perception of a consensus that does not exist. This logical fallacy involves a group or individual assuming that their own opinions, beliefs and predilections are more prevalent amongst the general public than they really are. This bias is commonly present in a group setting where one thinks the collective opinion of their own group matches that of the larger population. Since the members of a group reach a consensus and rarely encounter those who dispute it, they tend to believe that everybody thinks the same way. This is done to justify one's own beliefs.
Illusory superiority is a cognitive bias in which people overestimate the degree to which they possess desirable qualities relative to others, or underestimate their negative qualities relative to others. Such over- and under-estimations serve to bolster people's self-esteem. People who succumb to the illusory superiority bias have inflated views of their own characteristics. Some surveys have found all, or nearly all, the participants rate themselves as above the group average. The strongest effect has been found when people rate themselves on abilities at which they are totally incompetent. These subjects have the greatest disparity between their actual performance (at the low end of the distribution) and their self-rating (placing themselves above average). The effect has been found when people compare themselves to others on many different abilities and personality traits:
Groupthink is a term coined by psychologist Irving Janis to describe a process by which a group can make bad or irrational decisions. In a groupthink situation, each member of the group attempts to conform his or her opinions to what they believe to be the consensus of the group. In a general sense, this seems a rational way to approach the situation. However, it results in a situation in which the group ultimately agrees upon an action that each member might individually consider to be unwise (the risky shift).
Janis' original definition of the term was "a mode of thinking that people engage in when they are deeply involved in a cohesive in-group, when the members' strivings for unanimity override their motivation to realistically appraise alternative courses of action." The word groupthink was intended to be reminiscent of George Orwell's coinages (such as doublethink and duckspeak) from the fictional language Newspeak, which he portrayed in his novel Nineteen Eighty-Four.
Groupthink tends to occur on committees and in large organizations. Janis originally studied the Pearl Harbor bombing, the Vietnam War and the Bay of Pigs Invasion. Recently, in 2004, the US Senate Intelligence Committee's Report on the U.S. Intelligence Community's Prewar Intelligence Assessments on Iraq blamed groupthink for failures to correctly interpret intelligence relating to Iraq's weapons of mass destruction capabilities.
Janis provides descriptions of groupthink:
|antecedent conditions of groupthink||symptoms indicative of groupthink||symptoms of decisions affected by groupthink|
|Insulation of the group||Illusion of invulnerability||Incomplete survey of alternatives|
|High group cohesiveness||Unquestioned belief in the inherent morality of the group||Incomplete survey of objectives|
|Directive leadership||Collective rationalization of group's decisions||Failure to examine risks of preferred choice|
|Lack of norms requiring methodical procedures||Shared stereotypes of outgroup, particularly opponents||Failure to re-appraise initially rejected alternatives|
|Homogeneity of members' social background and ideology||Self-censorship; members withhold criticisms||Poor information search|
|High stress from external threats with low hope of a better solution than the one offered by the leader(s)||Illusion of unanimity (see false consensus effect)||Selective bias in processing information at hand (see also confirmation bias)|
| ||Direct pressure on dissenters to conform||Failure to work out contingency plans|
| ||Self-appointed "mindguards" protect the group from negative information|| |
One mechanism which management consultants recommend to avoid groupthink is to place responsibility and authority for a decision in the hands of a single person who can turn to others for advice. Others advise that a pre-selected individual take the role of disagreeing with any suggestion presented, thereby making other individuals more likely to present their own ideas and point out flaws in others' — and reducing the stigma associated with being the first to take negative stances (see Devil's Advocate).
Anonymous feedback via suggestion box or online chat has been found to be a useful remedy for groupthink: negative or dissenting views of proposals can be raised without any individual being identifiable by others as having lodged a critique. Thus the social capital of the group is preserved, as every member can plausibly deny having been the one to raise a dissenting point.
Before proposing complex theories to explain sociological phenomena (e.g., World Systems Theory), especially at the macro and/or societal levels, sociologists should first turn to demographic indicators for possible explanations. Demographic analysis is a powerful tool that can explain a number of sociological phenomena.
For instance, in examining the elements that led to the First World War, most people turn to political and diplomatic conflicts but fail to consider the implications of expanding populations in the European countries involved. Expanding populations result in increased competition for resources (i.e., food, land, access to trade routes and ports, etc.). Expanding populations may not have been the primary cause of World War I, but they may have played a role in the increased hostilities leading up to the war. In this fashion, demographic indicators are often informative in explaining world events and should be turned to first as explanations.
The study of human populations has its roots, like sociology generally, in the societal changes that accompanied both the scientific and industrial revolutions. Some early mathematicians developed primitive forms of life tables, which are tables of life expectancies, for life insurance and actuarial purposes. Censuses, another demographic tool, were instituted for primarily political purposes:
The development of demographic calculations started in the 18th century. Census taking, on the other hand, has a long history dating back close to 2,000 years among the Chinese and the Romans and even further back in history among some groups in the Middle East. Most modern censuses began in the late 18th century.
Demography relies on large data sets that are primarily derived from censuses and registration statistics (i.e., birth, death, marriage registrations). Large data sets over long periods of time (e.g., the U.S. census is conducted every 10 years) are required to develop trends in demographic indicators, like birth and death rates.
In many countries, particularly in developing nations, reliable demographic data are still difficult to obtain. In some locales this may be due to the association of census with taxation.
Fertility and Fecundity
The demographic transition is a model and theory describing the transition from high birth rates and death rates to low birth and death rates that occurs as part of the economic development of a country. In pre-industrial societies, population growth is relatively slow because both birth and death rates are high. In most post-industrial societies, birth and death rates are both low. The transition from high rates to low rates is referred to as the demographic transition. This understanding of societal changes is based on the work of Thompson, Blacker, and Notestein, who derived the model based on changes in demographics over the preceding two hundred years or so.
The beginning of the demographic transition in a society is indicated when death rates drop without a corresponding fall in birth rates (usually the result of improved sanitation and advances in healthcare). Countries in the second stage of the demographic transition (see diagram) experience a large increase in population. This is depicted in the diagram when death rates fall in stage two but birth rates do not fall until stage three. The red line begins its rapid upward growth in stage two and begins to level off at the end of stage three.
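The stage-by-stage logic of the model can be sketched numerically. The birth and death rates below are illustrative values, not measurements from any real country; the point is simply that the natural growth rate (births minus deaths per 1,000 people per year) spikes in stage two and falls back to near zero by stage four.

```python
# Illustrative (hypothetical) birth and death rates, per 1,000 people per
# year, for each stage of the demographic transition.
stages = {
    1: {"birth": 40, "death": 40},  # pre-industrial: both rates high
    2: {"birth": 40, "death": 15},  # death rate falls first: rapid growth
    3: {"birth": 20, "death": 12},  # birth rate falls: growth slows
    4: {"birth": 11, "death": 10},  # post-industrial: both rates low
}

for stage, rates in stages.items():
    natural_increase = rates["birth"] - rates["death"]  # per 1,000 per year
    print(f"Stage {stage}: natural increase of {natural_increase} per 1,000")
```

Run as written, this shows natural increase of 0, 25, 8, and 1 per 1,000 across the four stages, matching the rapid rise in stage two and leveling-off by stage four described above.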
By the end of stage three, birth rates drop to fall in line with the lower death rates. While there are several theories that attempt to explain why this occurs (e.g., Becker and Caldwell, who view children as economic commodities), why birth rates decline in post-industrial societies is still being evaluated. Many developed countries now have a population that is static or, in some cases, shrinking.
As with all models, this is an idealized, composite picture of population change in these countries. The model is a generalization that applies to these countries as a group and may not accurately describe all individual cases. Whether or not it will accurately depict changes in developing societies today remains to be seen.
Overpopulation indicates a scenario in which the population of a living species exceeds the carrying capacity of its ecological niche. Overpopulation is not a function of the number or density of the individuals, but rather the number of individuals compared to the resources they need to survive. In other words, it is a ratio: population over resources. If a given environment has a population of 10, but there is food and drinking water enough for only 9 people, then that environment is overpopulated, while if the population is 100 individuals but there is food and water enough for 200, then it is not overpopulated. Resources to be taken into account when estimating whether an ecological niche is overpopulated include clean water, food, shelter, warmth, etc. In the case of human beings, there are others, such as arable land and, for all but tribes with primitive lifestyles, lesser resources such as jobs, money, education, fuel, electricity, medicine, proper sewage and garbage management, and transportation.
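The ratio definition above can be expressed as a one-line check. The numbers simply restate the hypothetical example in the text (resources for 9 versus 200 people); treating carrying capacity as a single known quantity is a simplifying assumption that real ecology rarely grants.

```python
def is_overpopulated(population, carrying_capacity):
    # Overpopulation is a ratio: more individuals than the resources
    # of the ecological niche can support.
    return population > carrying_capacity

print(is_overpopulated(10, 9))     # True: food and water for only 9 people
print(is_overpopulated(100, 200))  # False: resources for twice the population
```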
Presently, every year the world's human population grows by approximately 80 million. About half the world lives in nations with sub-replacement fertility, and population growth in those countries is due to immigration. The United Nations projects that the world human population will stabilize in 2075 at nine billion due to declining fertility rates. All the nations of East Asia, with the exceptions of Mongolia, the Philippines, and Laos, are below replacement level. Russia and Eastern Europe are dramatically below replacement fertility. Western Europe also is below replacement. In the Middle East, Iran, Tunisia, Algeria, Turkey, and Lebanon are below replacement. Canada, Australia, and New Zealand are similar to Western Europe, while the United States is just barely below replacement with about 2.0 births per woman. All four of these nations still have growing populations due to high rates of immigration.
Much of the projected growth is expected to come from African countries where birth rates remain high. While birth rates in most countries have fallen since 1990, in some parts of Africa birth rates have actually increased and the average woman has more than five children, well above the replacement rate.
Early Projections of Overpopulation
Population as a function of food availability
Effects of overpopulation
Possible Solutions to Overpopulation
The likelihood of a given individual in the U.S. moving to another place in the U.S. in any given year has declined over the last 40 years. Only about 1 in 10 Americans have moved in the last year, which is about half the proportion that changed residences annually in the 1960s. The reduction in moves is attributable to aging populations (older people are less likely to move) and an increase in dual-career couples. Those who do move are generally driven by jobs.
Close to 37% of Americans have never moved from the community in which they were born. There are wide variations across states, however: 76% of Texans were born in-state while only 14% of Nevadans were born in-state. Some states also lose a large number of the people who were born there, like Alaska, where only 28% of the people born in that state have remained. Immigration is often a controversial topic, for a variety of reasons, though many have to do with competition between those already living in the destination location and those arriving in that location. One recent study finds that one type of competition between immigrants and non-immigrants may be overstated. Some people have suggested that natives' opportunities to attend college are negatively impacted through competition with immigrants. Neymotin (2009) finds that competition with immigrants does not harm the educational outcomes of U.S. natives and may in fact facilitate college attendance.
Urbanization is the physical growth of urban areas as a result of global change. Urbanization is also defined by the United Nations as the movement of people from rural to urban areas, with population growth equating to urban migration. The United Nations projected that half of the world's population would live in urban areas at the end of 2008. Urbanization is closely linked to modernization, industrialization, and the sociological process of rationalization.
Changing forms of urbanization
Because demography is interested in changes in human populations, demographers focus on specific indicators of change. Two of the most important indicators are birth and death rates, which are also referred to as fertility (see also fecundity) and mortality. Additionally, demographers are interested in migration trends or the movement of people from one location to another. Some of the specific measures used to explore these elements of population change are discussed below.
Fertility, in demography, refers to the actual production of offspring by females, rather than their physical capability to produce offspring.
Fecundity is the potential reproductive capacity of a female. Some of the more common demographic measures used in relation to fertility and/or fecundity include:
Another important demographic concept relating to fertility is replacement level. Replacement level fertility refers to the number of children that a woman (or monogamous couple) must have in order to replace the existing population. Sub-replacement fertility is a fertility rate that is not high enough to replace an existing population. Replacement level fertility is generally set at 2.1 children in a woman's lifetime (this number varies by geographic region given different mortality rates). Sub-replacement fertility is below approximately 2.1 children in a woman's lifetime. The reason the number is set to 2.1 children per woman is because two children are needed to replace the parents and an additional one-tenth of a child is needed to make up for the mortality of children and women who do not reach the end of their reproductive years. Of course, women don't have one-tenth of a child; this results from statistical averaging between women who have more than two children and those who have two or fewer children.
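The arithmetic behind the 2.1 figure can be sketched as follows. The survival probability and sex ratio used here are illustrative assumptions, not official estimates; the point is that each woman must bear slightly more than two children so that, on average, one daughter survives to replace her.

```python
# Hypothetical inputs (illustrative, not official figures):
sex_ratio_at_birth = 1.05    # boys born per girl born (slightly more boys)
p_daughter_survives = 0.975  # chance a girl reaches her reproductive years

# Daughters are only 1 / (1 + 1.05) of all births, so replacing one woman
# with one daughter requires (1 + 1.05) births on average, inflated further
# by the chance that the daughter does not survive to childbearing age.
births_per_surviving_daughter = (1 + sex_ratio_at_birth) / p_daughter_survives
print(round(births_per_surviving_daughter, 2))  # ≈ 2.1
```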
The chart below illustrates trends in childbearing by region of the world. Fertility rates dropped earlier in the more developed regions of the world, followed by Asia and Latin America. Fertility rates are just starting to decline in Africa.
The chart below highlights the varied fertility rates of specific countries as some have very low fertility rates, many have moderate rates, and some have very high rates.
The following chart illustrates the relationship between contraceptive use and the total fertility rate by regions of the world. Increased contraceptive use is associated with lower numbers of children per woman.
One of the strongest predictors of fertility rates is women's educational attainment. Almost universally, higher levels of educational attainment result in lower fertility rates. It is not, however, education itself that causes declines in fertility but rather its association with other factors that reduce fertility: women with higher levels of education delay marriage, have improved labor market opportunities, are more likely to use contraception during intercourse, and are less likely to adopt traditional childbearing roles.
Fertility rates are also closely related to a country's level of development, which influences other factors. For instance, women who have kids in developed countries have increased opportunity costs, meaning they will make less money because of time spent outside the workforce raising kids. This is true in developed countries because women are more likely to be highly skilled and well-paid (relative to women in developing countries). Additionally, delayed childbearing, probability of a child reaching adulthood, norms about ideal family sizes, and pervasiveness of contraceptives will all reduce fertility rates. But one of the biggest factors is the cost of children. In undeveloped and developing countries, children are often an economic asset to parents as they serve as cheap labor on the farm; they don't require pay, just food and shelter. That is not the case in developed countries, where very few people work in agriculture (roughly 2% in the US). Instead, children are an economic liability, meaning they cost money while not generating money for the parents. The cost of raising a child from birth to 17 in a middle-income home in 2005 was $191,000. That cost goes up if parents pay for a child's college education, which averages between $12,000 and $30,000. Thus, the cost of raising children in developed countries reduces fertility rates in those countries.
Mortality refers to the finite nature of humanity: people die. Demographers studying mortality are interested in the number of deaths in a given time or place, or in the proportion of deaths relative to a population. Some of the more common demographic measures of mortality include:
Note that the crude death rate as defined above and applied to a whole population can give a misleading impression. For example, the number of deaths per 1000 people can be higher for developed nations than in less-developed countries, despite standards of health being better in developed countries. This is because developed countries have relatively more older people, who are more likely to die in a given year, so that the overall mortality rate can be higher even if the mortality rate at any given age is lower. A more complete picture of mortality is given by a life table which summarizes mortality separately at each age.
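This composition effect can be demonstrated with hypothetical numbers. In the sketch below, the "developed" population has lower mortality at every age, yet its older age structure gives it the higher crude death rate.

```python
# Hypothetical age-specific death rates (deaths per 1,000 per year).
# The developed country's rate is lower at EVERY age.
rates_developed  = {"young": 1.0, "middle": 4.0, "old": 40.0}
rates_developing = {"young": 5.0, "middle": 8.0, "old": 60.0}

# Hypothetical age structures (shares of the total population).
pop_developed  = {"young": 0.20, "middle": 0.50, "old": 0.30}
pop_developing = {"young": 0.45, "middle": 0.45, "old": 0.10}

def crude_death_rate(rates, shares):
    # The crude rate is the age-specific rates weighted by population shares.
    return sum(rates[age] * shares[age] for age in rates)

print(round(crude_death_rate(rates_developed, pop_developed), 2))    # 14.2
print(round(crude_death_rate(rates_developing, pop_developing), 2))  # 11.85
```

The "developed" population's crude rate (14.2 per 1,000) exceeds the "developing" population's (11.85 per 1,000) purely because more of its people are old, which is exactly why life tables, reporting mortality separately at each age, give the fuller picture.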
This chart depicts infant mortality by region of the world. The less developed regions of the world have higher infant mortality rates than the more developed regions.
This chart depicts life expectancy by region of the world. Similar to infant mortality, life expectancies are higher in more developed regions of the world.
According to recent research, one of the best predictors of longevity (i.e., a long life) is education, even when other factors are controlled: the more educated you are, the longer you can expect to live. A few additional years of schooling can add several additional years to your life and vastly improve your health in old age. The mechanism through which this works is not the schooling itself, but schooling's influence on other health-related behaviors. The more education someone has, the lower his/her likelihood of smoking and engaging in unhealthy and high risk behaviors. Education also increases the probability of people engaging in healthy behaviors, like frequently exercising.
Other factors associated with greater longevity include:
In An Essay on the Principle of Population, first published in 1798, Thomas Malthus argued that, if left unrestricted, human populations would continue to grow until they became too large to be supported by the food grown on available agricultural land. He proposed that, while food supplies tend to grow arithmetically, population grows geometrically (that is, exponentially). At that point, the population would be restrained through mass famine and starvation. Malthus argued for population control, through moral restraint, to avoid this happening.
The alternative to moral restraint, according to Malthus, is biological and natural population limitation. As the population exceeds the amount of available resources the population decreases through famine, disease, or war, since the lack of resources causes mortality to increase. This process keeps the population in check and ensures it does not exceed the amount of resources.
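Malthus's arithmetic-versus-geometric argument is easy to sketch with made-up starting values; whatever numbers are chosen, a quantity growing by a fixed ratio eventually overtakes one growing by a fixed increment.

```python
food = 100.0       # units of food (hypothetical; enough for 100 people)
population = 10.0  # hypothetical starting population

for generation in range(10):
    food += 25         # arithmetic growth: a fixed increment per generation
    population *= 1.5  # geometric growth: a fixed ratio per generation

print(food)               # 350.0
print(round(population))  # 577
```

After ten generations, food has grown 3.5-fold while the population has grown almost 58-fold; on these (arbitrary) numbers, the population has long since outstripped its food supply, which is the crossing point at which Malthus expected famine, disease, or war to check further growth.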
Over the two hundred years following Malthus's projections, famine has overtaken numerous individual regions. Proponents of this theory, Neo-Malthusians, state that these famines were examples of Malthusian catastrophes. On a global scale, however, food production has grown faster than population. It has often been argued that future pressures on food production, combined with threats to other aspects of the earth's habitat such as global warming, make overpopulation a still more serious threat in the future.
There are some scholars who argue that human populations predictably grow and shrink according to their available food supply: populations grow when there is an abundance of food and shrink in times of scarcity. This idea is only slightly less problematic than the idea that human population growth is not guided by food production, as it suggests that every time food production is intensified to feed a growing population, the population responds by increasing even more. The history of some human populations supports this theory, as consistent population growth began with the agricultural revolution, when food supplies consistently increased.
Critics of this idea point out that birth rates are, by choice, lowest in developed nations, which also have the highest access to food. In fact, the population is decreasing in some countries with an abundant food supply. Thus, human populations do not always grow to match the available food supply. Critics cite other factors that contribute to declining birth rates in developed nations, including increased access to contraception, later ages of marriage, the growing desire of many women in such settings to seek careers outside of childrearing and domestic work, and the decreased economic 'utility' of children in industrialized settings. The latter explanation stems from the fact that children perform a great deal of work in small-scale agricultural societies and work less in industrial ones; this interpretation may seem callous, but it has been cited to explain the drop-off in birthrates in all industrializing regions worldwide.
For some, the concept that human populations behave in the same way as do populations of bears and fish is hard to believe; for others it indicates a feasible solution to population issues. In either case, since populations are tied to the food they consume, discussions of populations should not take place without considering the role played by food supply. There is a substantial body of research that has considered the ability of the planet to provide sufficient food for the world's growing population. This research suggests that the planet can potentially provide sufficient food for the projected peak human population of 9 billion people, but only if agriculture is carefully managed. Factors that must be considered include genetically modified crops, employing agricultural tools in correct, context-specific ways, aquaculture, and simultaneously working to limit harm to the environment.
In any case, many proponents of population control have averred that famine is far from being the only problem attendant to overpopulation. These critics point out ultimate shortages of energy sources and other natural resources, as well as the importance of serious communicable diseases in dense populations and war over scarce resources such as land area. A shortage of arable land (where food crops will grow) is also a problem.
The world's current agricultural production, if it were distributed evenly, would be sufficient to feed everyone living on the Earth today. However, many critics hold that, in the absence of other measures, simply feeding the world's population well would only make matters worse: natural growth would cause the population to grow to unsustainable levels and would result directly in famines and deforestation and indirectly in pandemic disease and war.
Some of the other characteristics of overpopulation include:
Another point of view on population growth and how it affects the standard of living is that of Virginia Abernethy. In Population Politics she shows evidence that declining fertility following industrialization only holds true in nations where women enjoy a relatively high status. In strongly patriarchal nations, where women enjoy few rights, a higher standard of living tends to result in population growth. Abernethy argues that foreign aid to poor countries must include significant components designed to improve the education, human rights, political rights, political power, and economic status and power of women.
Some approach overpopulation with a survival of the fittest, laissez-faire attitude, arguing that if the Earth's ecosystem becomes overtaxed, it will naturally regulate itself. In this mode of thought, disease or starvation are "natural" means of lessening population. Objections to this argument are:
Others argue that economic development is the best way to reduce population growth as economic development can spur demographic transitions that seem to naturally lead to reductions in fertility rates.
In either case, it is often held that the most productive approach is to provide a combination of help targeted towards population control and self-sufficiency. One of the most important measures proposed for this effort is the empowerment of women educationally, economically, politically, and in the family. The value of this philosophy has been substantially borne out in cases where great strides have been taken toward this goal. Where women's status has dramatically improved, there has generally been a drastic reduction in the birthrate to more sustainable levels. Other measures include effective family planning programs, local renewable energy systems, sustainable agriculture methods and supplies, reforestation, and measures to protect the local environment.
David Pimentel, a Cornell University professor of ecology and agricultural sciences, sees several possible scenarios for the 22nd century:
Spreading awareness of the issues is an important first step in addressing it.
Once countries pass through the demographic transition, some experience fertility rate decreases so substantial that they fall well below replacement level and their populations begin to shrink (as has Russia's in recent years, though emigration has also played a role in Russia's population decline). A new fear for many governments, particularly those in countries with very low fertility rates, is that a declining population will reduce the GDP and economic growth of the country, as population growth is often a driving force of economic expansion. To combat extremely low fertility rates, some of these governments have introduced pro-family policies that include payments to parents for having children and extensive parental leave. Such policies may reverse the low fertility rates, but they also seem shortsighted in light of the concerns associated with overall world population growth.
As more and more people leave villages and farms to live in cities, urban growth results. The rapid growth of cities like Chicago in the late 19th century and Mumbai a century later can be attributed largely to rural-urban migration and the demographic transition. This kind of growth is especially commonplace in developing countries.
The rapid urbanization of the world's population over the twentieth century is described in the 2005 Revision of the UN World Urbanization Prospects report. The global proportion of urban population rose dramatically from 13% (220 million) in 1900, to 29% (732 million) in 1950, to 49% (3.2 billion) in 2005. The same report projected that the figure is likely to rise to 60% (4.9 billion) by 2030. Urbanization rates vary between countries. The United States and United Kingdom have a far higher urbanization level than China, India, Swaziland, or Niger, but a far slower annual urbanization rate, since much less of their population is still living in rural areas.
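As a quick consistency check, each pair of figures above (urban share and urban population) implies a world population total for that year, since the total is simply the urban count divided by the urban share. This is only an arithmetic sketch using the numbers as quoted:

```python
# Urban share (as a fraction) and urban population (in millions) as quoted
# above; dividing the urban count by its share yields the implied world total.
figures = {
    1900: (0.13, 220),
    1950: (0.29, 732),
    2005: (0.49, 3200),
    2030: (0.60, 4900),  # UN projection
}

for year, (share, urban_millions) in sorted(figures.items()):
    world_millions = urban_millions / share
    print(f"{year}: implied world population ~ {world_millions:,.0f} million")
```

The implied totals (roughly 1.7 billion in 1900, 2.5 billion in 1950, and 6.5 billion in 2005) line up with standard world population estimates, suggesting the quoted figures are internally consistent.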
People move into cities to seek economic opportunities. A major contributing factor is known as "rural flight". In rural areas, often on small family farms, it is difficult to improve one's standard of living beyond basic sustenance. Farm living is dependent on unpredictable environmental conditions, and in times of drought, flood or pestilence, survival becomes extremely problematic. In modern times, industrialization of agriculture has negatively affected the economy of small and middle-sized farms and strongly reduced the size of the rural labour market. Cities, in contrast, are known to be places where money, services and wealth are centralized. Cities are where fortunes are made and where social mobility is possible. Businesses, which generate jobs and capital, are usually located in urban areas. Whether the source is trade or tourism, it is also through the cities that foreign money flows into a country. Thus, as with immigration generally, there are factors that push people out of rural areas and pull them into urban areas.
Urban areas also offer better basic services, as well as specialist services not found in rural areas. There are more job opportunities and a greater variety of jobs. Health is another major factor: people, especially the elderly, are often forced to move to cities where there are doctors and hospitals that can address their health needs. Other factors include a greater variety of entertainment (e.g., restaurants, movie theaters, theme parks) and better quality of education in the form of universities. Due to their high populations, urban areas can also have much more diverse social communities, allowing individuals to find people like them when they might not be able to in rural areas. These conditions are heightened during times of change from a pre-industrial society to an industrial one.
As cities develop, effects can include a dramatic increase in costs, often pricing the local working class out of the real estate market. For example, chapter 11 of Eric Hobsbawm's The Age of Revolution: 1789-1848 (published 1962 and 2005) states: "Urban development in our period [1789-1848] was a gigantic process of class segregation, which pushed the new labouring poor into great morasses of misery outside the centres of government and business and the newly specialised residential areas of the bourgeoisie. The almost universal European division into a 'good' west end and a 'poor' east end of large cities developed in this period." This division is likely due to the prevailing south-west wind, which carries coal smoke and other airborne pollutants downwind, making the western edges of towns preferable to the eastern ones. Similar problems now affect the developing world, where rapid urbanization produces rising inequality. The drive for growth and efficiency can lead to less equitable urban development.
Urbanization is often viewed as a negative trend, but it can, in fact, be perceived simply as a natural outcome of individual and corporate efforts to reduce the expense of commuting and transportation while improving opportunities for jobs, education, housing, and transportation. Living in cities permits individuals and families to take advantage of the opportunities of proximity, diversity, and marketplace competition.
One environmental concern associated with urbanization is the urban heat island. The urban heat island is formed when industrial and urban areas are developed and heat becomes more abundant. In rural areas, a large part of the incoming solar energy is used to evaporate water from vegetation and soil. In cities, where there is less vegetation and exposed soil, the majority of the sun's energy is absorbed by urban structures and asphalt. Hence, during warm daylight hours, less evaporative cooling in cities allows surface temperatures to rise higher than in rural areas. Additional city heat is given off by vehicles and factories, as well as by industrial and domestic heating and cooling units. This effect causes the city to become 2 to 10 °F (1 to 6 °C) warmer than surrounding landscapes. Impacts also include reduced soil moisture and intensified carbon dioxide emissions.
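The two temperature ranges quoted above are related by the conversion rule for temperature differences: a difference in degrees Fahrenheit scales by 5/9 to give degrees Celsius, with no 32-degree offset (the offset applies only to absolute temperatures). A minimal check:

```python
def delta_f_to_c(delta_f):
    """Convert a temperature *difference* from degrees F to degrees C.

    Differences scale by 5/9; the 32-degree offset applies only to
    absolute temperature readings, not to differences between them.
    """
    return delta_f * 5.0 / 9.0

for df in (2, 10):
    print(f"{df} F difference ~ {delta_f_to_c(df):.1f} C")
```

A 2 °F difference is about 1.1 °C and a 10 °F difference about 5.6 °C, which the text rounds to the 1 to 6 °C range.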
The effects of urbanization may be an overall positive for the environment. Birth rates of new urban dwellers tend to fall quickly toward the replacement rate (2.1 children per woman) and often keep falling. This can help prevent overpopulation (see discussion below). Additionally, urbanization puts a stop to destructive subsistence farming techniques, like slash-and-burn agriculture. Finally, it minimizes land use by humans, leaving more for nature.
Different forms of urbanization can be classified depending on the style of architecture and planning methods, as well as the historic growth of areas. In cities of the developed world, urbanization traditionally exhibited a concentration of human activities and settlements around the downtown area. Recent developments, such as inner-city redevelopment schemes, mean that new arrivals in cities no longer necessarily settle in the center. In some developed regions, the reverse effect, originally called counter-urbanization, has occurred, with cities losing population to rural areas; it is particularly common among richer families. This has been made possible by improved communications and means of transportation, and has been driven by factors such as fear of crime and poor urban environments. Later termed "white flight", the effect is not restricted to cities with a high ethnic minority population. When the residential area shifts outward, this is called suburbanization. Some research suggests that suburbanization has gone so far as to form new points of concentration outside the downtown, in both developed and developing countries such as India.
Urbanization can be planned or organic. Planned urbanization (e.g., planned communities) is based on an advance plan, which can be prepared for military, aesthetic, economic, or urban design reasons. Organic urbanization is not organized and happens haphazardly. Landscape planners are responsible for landscape infrastructure (e.g., public parks, sustainable urban drainage systems, greenways), which can be planned before urbanization takes place or afterward, to revitalize an area and create greater livability within a region. Planned urbanization and development is the aim of the American Institute of Planners.
Human life is often divided into various age spans, like the following:
These divisions are somewhat arbitrary, but generally capture periods of life that reflect a certain degree of similarity. In many countries, such as Sweden, adulthood legally begins at the age of eighteen. This is a major age milestone that is marked by significantly different attitudes toward the person who undergoes the transition.
While aging itself is a biological process, what it means to be "old" or "young", and at what ages such distinctions are made, varies by culture. In Western societies, where youth is highly valued, people are considered "old" at much younger ages than in Eastern societies, where age is often seen to beget wisdom. This emphasis on youth translates into considerable expenditures on cosmetics and surgery to hide signs of aging, particularly among women, but also among men. Ironically, adolescents take just the opposite approach, often trying to appear "older", though obviously not too much older.
The labels of "old" and "young" also vary by life expectancy. In societies where lifespans are relatively short (e.g., Chad), one could be considered "old" or "middle-aged" by one's mid-twenties, whereas in countries with longer lifespans (e.g., Japan), the mid-twenties are still considered young adulthood.
The activities expected of people at different ages are also socially constructed and relative to culture. For instance, retirement only became a "universal" American institution in the post-World War II era, as the growth of Social Security and private pensions dramatically expanded the safety net available to aging workers leaving the labor market. Likewise, the idea of childhood as an age of innocence, when children should be kept from adult worries and spend their time pursuing education and recreation, is only widely held in highly developed countries and is a relatively recent invention, following the industrial revolution and the introduction of child-labor laws.
Ageism is prejudice on the grounds of age. While it can be targeted toward individuals of any age, two groups that are often targeted are the young and the elderly.
Treatment of the Young
Treatment of the Elderly
Prior to the introduction of Social Security in the U.S. and other programs for the needy, the elderly were the poorest age group in the U.S. Social Security (technically Old-Age, Survivors, and Disability Insurance, or OASDI) is an income redistribution program that takes taxes from those working and distributes them to those who cannot work or who are old enough to be considered past retirement age. With the introduction of Social Security, the poverty rates of the elderly in the U.S. dropped dramatically, as illustrated in the figure below.
Despite the success of the Social Security program in reducing poverty among the elderly, one unforeseen consequence has been the increasing poverty of people under 18. Conflict theory provides a clear theoretical argument to explain this: Since there are limited resources - in this case limited tax revenue - if those resources go to one group, they must necessarily come from another group. Thus, if the elderly see an increase in their total share of tax revenue, it is likely that some other age group will see a decrease in its total share of tax revenue. Thus, individuals under 18 have seen some programs cut that would have otherwise helped maintain their lower levels of poverty. Even so, poverty rates across all three age groups depicted in the figure above have declined from what they were prior to the introduction of tax redistribution policies like Social Security.
While generally considered a highly successful program at reducing poverty among the elderly, Social Security is currently experiencing problems. The chart below illustrates the problems with Social Security payments for retirees in the U.S. Currently, more money is received from Social Security taxes than is distributed to retirees. However, if Social Security is not modified, that will change around 2017: less will come in than will go out. For about 20 years following that, the Social Security Trust Fund - the money that was collected when there was a surplus in tax revenue - will make up the difference in payments. But by 2037 the Trust Fund reserves will be exhausted, and payments to beneficiaries will drop to about 75% of what they would normally receive.
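The dynamic described above - annual outgo overtaking annual income, the Trust Fund covering the gap until it is exhausted, and benefits thereafter payable only out of incoming taxes - can be sketched with purely illustrative numbers. None of the amounts below are actual SSA figures; they are chosen only to mirror the shape of the scenario in the text (a roughly 20-year drawdown ending with about 75% of scheduled benefits payable):

```python
def years_until_exhaustion(fund, annual_income, annual_outgo):
    """Count the years a reserve fund lasts while outgo exceeds income.

    Hypothetical illustration of the Trust Fund drawdown: each year the
    fund shrinks by the deficit (outgo minus income) until it hits zero.
    """
    years = 0
    while fund > 0 and annual_outgo > annual_income:
        fund += annual_income - annual_outgo
        years += 1
    return years

# Illustrative values: a 200-per-year deficit drains a 4,000-unit fund in 20 years.
print(years_until_exhaustion(fund=4000, annual_income=600, annual_outgo=800))
# After exhaustion, only incoming taxes fund benefits: 600/800 = 75% payable.
print(f"{600 / 800:.0%}")
```

The point of the sketch is that the exhaustion date and the post-exhaustion benefit level follow mechanically from the income-outgo gap and the size of the reserve.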
This results from the declining ratio of employed workers to benefit recipients, a ratio that continues to decrease as the U.S. population grows older (as shown in the figure below).
While the elderly have seen substantial improvements in their economic situation in recent decades, those improvements have not equally affected men and women. Women, whether working or not, are more likely to fall below the federal poverty line than are men, as depicted in the figure below.
This is of special concern considering women live longer than men, as illustrated in the next figure.
Why women live longer than men is not perfectly understood, but several factors may contribute. For instance, men engage in riskier behaviors than women, reducing their life expectancy. Men are also more "successful" when attempting suicide, which increases the suicide death rate among men. Another factor that may contribute to the greater life expectancy of women is the different types of jobs men and women tend to hold during their lifetimes. Other biological factors likely play a role as well, including greater heart health among women, though how much each contributes to women's greater longevity is not entirely clear. The combined effect of these differences may or may not fully account for the longevity gap between men and women, but it is clear that women do live longer than men, and that this holds true around the world.
The geography of age in the US is quite intriguing. The map below illustrates that the elderly are not equally distributed throughout the U.S.
There are concentrations of the elderly in the Midwest and in the South, particularly in Florida. While the high concentration of the elderly in Florida may not come as much of a surprise to most Americans who are aware of the high rate at which people who retire move to Florida, the high concentration of the elderly in the Midwest may be more surprising. This higher concentration is not because the elderly are moving to the Midwest but rather because the young are moving out of the Midwest as they search for jobs. Thus, the two regions with the highest concentrations of the elderly in the US have high concentrations of elderly people for very different reasons.
The city of Pittsburgh offers an intriguing case study of the effects of an aging population on a city. As of 2008 more people are dying in Pittsburgh than are being born.
Add to this the fact that many young people are moving away from Pittsburgh to find jobs, and you have the perfect recipe for both population decline and an aging population. One result of this demographic shift is a greater demand for health care provision; health care has replaced steel as Pittsburgh's biggest employer.
Another result of these trends is the decline in students attending Pittsburgh schools. In the 1980s there were nearly 70,000 students in the public schools; by 2008 there were only about 30,000, and the number is declining by about 1,000 every year.
In short, as populations in specific locations age, the entire social structure must change to accommodate the new demographic, which supports the notion of equilibrium in structural-functionalist theory.
Globally, most countries are seeing the average life expectancy of their populations increase. This translates into a greater percentage of the world's population falling above the age of 65, as illustrated in the figure below.
However, the rate at which the world's population is aging is not uniform across countries, and some countries have actually seen decreasing life expectancies, largely as a result of AIDS. The varied life expectancies and younger populations are illustrated in the map below, which depicts the percentage of each country's population that is over 65.
It is pretty clear from the map that more developed countries have much older populations, with a greater percentage of the population aged 65+. The least developed countries are also the youngest countries, as life expectancies there are substantially lower.
While aging is often associated with declining health, current research suggests there are some things people can do to remain healthy longer into old age. For instance, maintaining a positive attitude has been shown to be correlated with better health among the elderly. Older individuals with more positive attitudes and emotions engage in less risky behavior and have lower levels of stress, both of which are correlated with better health.
While most people are aware of the mistreatment of the elderly (see below), few people seem to realize that young people are often subjected to discrimination because of their age. Discrimination against young people is primarily in the area of behavioral restrictions, often by parents, but also in public places like malls and stores. Some stores have gone so far as to limit the hours young people can be in their stores.
While the above are clear examples of discrimination, there are other restrictions placed on young people based on the assumption that they are unable to make decisions for themselves. Examples of such restrictions include:
All of the above restrictions hinge upon the idea that young people lack the maturity required to make such important decisions. While this is likely true for some young people, others are mature enough to make these decisions. The restrictions are tied to specific ages for legal reasons, but they are not always followed, vary substantially by region and culture, and may not always make sense. At the same time, some young people do lack the maturity to make important decisions; legally, when young people make poor decisions, the defense of infancy is used to argue that such individuals are too immature to be held responsible for their decisions.
Another way in which the young are treated differently is in the marketing practices of corporations. Children and teenagers have little money of their own, but most of the money they do have is available for buying consumer goods, since their parents cover their basic living needs. Children can also heavily influence how their parents spend discretionary income. Manufacturers of consumer goods and providers of services (e.g., Disney) are aware of the buying power of young people and target them specifically. The elderly are also targeted as a consumer demographic, but the approaches are different.
Young people are also stereotyped as amusing but, at the same time, potentially dangerous and disturbing. Stereotypes like these translate into the discrimination toward young people described above.
While discrimination toward the young primarily takes the form of behavioral restrictions, discrimination toward the elderly ranges from behavioral restrictions to outright physical abuse. Elder abuse is a serious problem in the U.S.: there are nearly 2 million cases of elder abuse and self-neglect in the U.S. every year. Abuse refers to psychological/emotional abuse, physical abuse, sexual abuse, caregiver neglect, or financial exploitation, while self-neglect refers to behaviors that threaten the person's own health and safety.
Elderly individuals who are subjected to abuse have at least twice the risk of premature death of other seniors. Elders who suffer from self-neglect have an even higher risk (up to 5 times higher) of premature death than elders who do not. The higher risk of death associated with elder abuse affects both those who are physically and cognitively impaired and those who are more capable.
Additionally, the elderly are often stereotyped. One stereotype of the elderly is that they are inflexible in their attitudes and that they tend to only become more conservative with time. This is not the case - the elderly are quite adept at changing their views and often they become more tolerant as they age. Another stereotype of the elderly is that they are poorer drivers than younger people. This stereotype is also not backed by evidence, with some caveats. Up to about the age of 75, older drivers are actually safer than drivers of other ages. Beyond age 75 response times and visual acuity do begin to decline, leading to increases in accidents. Thus, many older drivers are actually much safer than the youngest drivers (under 18).
Aging does not result in similar outcomes for members of different races. There is evidence that black senior citizens are more likely to be abused - both physically and psychologically - and to suffer greater financial exploitation than white senior citizens.
While the focus of this chapter is on sociological explanations of deviance, there are explanations from other disciplines as well. For instance, recent research in neurology and psychology finds that boys with conduct disorder have differences in their brain structure and that those differences exist during childhood and adolescence. These differences likely contribute to their deviant behavior.
Crime statistics are usually data collected by governments for the reporting of incidents of criminal activity. They are useful for a number of reasons, beyond simply giving an awareness of the extent of criminal activity. Presented below are statistics on criminal activity and the criminal justice system for both the U.S. and selected nations around the world (for comparisons). The statistics included in this section were chosen to provide a sampling of how crime statistics can be useful beyond simply reporting incidents of criminal behavior.
It is important to understand that crime statistics do not provide a perfect view of crime. Government statistics on crime only show data for crimes that have been reported to authorities. These represent only a fraction of the crimes that have been acted upon by law enforcement, which in turn represent only a fraction of the crimes about which people have complained to the police, which in turn represent only a fraction of the total crimes committed. However, it should also be noted that television presents an unrealistic picture of the frequency of crime, particularly violent crime. Heavy viewers of crime dramas on TV (e.g., CSI, Law & Order) estimate that there are 2.5 times as many real-world deaths due to murder as non-viewers estimate. Thus, while crimes are under-reported, they also receive disproportionate attention in the media, leading people to think that crime is more prevalent than it actually is.
Incarceration Rates and Populations
High Crime Neighborhoods
Social control refers to the various means used by a society to bring its members back into line with cultural norms. There are two general types of social control:
Some researchers have outlined some of the motivations underlying the formal social control system. These motivations include:
Video Games and Deviance
Robert K. Merton, in his discussion of deviance, proposed a typology of deviant behavior. A typology is a classification scheme designed to facilitate understanding. In this case, Merton was proposing a typology of deviance based upon two criteria: (1) a person's motivations or adherence to cultural goals; (2) a person's belief in how to attain those goals. These two criteria are shown in the diagram below. According to Merton, there are five types of deviance based upon these criteria:
Conformity involves accepting both the cultural goals and the approved means of attaining them.
Innovation involves accepting the cultural goals but rejecting the legitimate means of attaining them.
Ritualism involves rejecting the cultural goals but routinely accepting the approved means.
Retreatism involves rejecting both the cultural goals and the approved means.
Rebellion involves rejecting both the cultural goals and the approved means and actively attempting to replace them with different goals and means.
What makes Merton's typology so fascinating is that people can turn to deviance in the pursuit of widely accepted social values and goals. For instance, individuals in the U.S. who sell illegal drugs have rejected the culturally acceptable means of making money, but still share the widely accepted cultural value in the U.S. of making money. Thus, deviance can be the result of accepting one norm, but breaking another in order to pursue the first.
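Merton's scheme amounts to a small lookup keyed on the two criteria: a person's orientation toward cultural goals and toward the institutionalized means of attaining them. A minimal sketch (the string keys are, of course, illustrative labels, not Merton's own wording):

```python
# Merton's typology keyed on (orientation toward cultural goals,
# orientation toward institutionalized means). "replace" marks rebellion,
# which substitutes new goals and means for the existing ones.
MERTON_TYPOLOGY = {
    ("accept", "accept"): "conformity",
    ("accept", "reject"): "innovation",
    ("reject", "accept"): "ritualism",
    ("reject", "reject"): "retreatism",
    ("replace", "replace"): "rebellion",
}

def classify(goals, means):
    """Return Merton's label for a (goals, means) orientation."""
    return MERTON_TYPOLOGY.get((goals, means), "unclassified")

# The drug seller described above accepts the cultural goal (making money)
# but rejects the approved means of attaining it:
print(classify("accept", "reject"))  # innovation
```

Encoding the typology as a mapping makes explicit that the categories exhaust the combinations of the two criteria Merton considered.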
The structural-functionalist approach to deviance argues that deviant behavior plays an important role in society for several reasons. First, deviance helps distinguish between what is acceptable behavior, and what is not. In a sense deviance is required in order for people to know what they can and cannot do. It draws lines and demarcates boundaries. This is an important function as it affirms the cultural values and norms of a society for the members of that society.
In addition to clarifying the moral boundaries of society, deviant behavior can also promote social unity, but it does so at the expense of the deviant individuals, who are obviously excluded from the sense of unity derived from differentiating the non-deviant from the deviants.
Finally, and quite out of character for the structural-functionalist approach, deviance is actually seen as one means for society to change over time. Deviant behavior can upset societal equilibrium, and in the process of restoring equilibrium, society is often forced to change. Thus, deviant behavior serves several important functions in society.
A clear example of how deviance reflects power imbalances is in the reporting and tracking of crimes. White-collar crimes are typically committed by individuals in higher social classes. Examples of white-collar crimes include:
As of 2009, the FBI and the Bureau of Justice Statistics do not provide clear statistics on white-collar crime, like they do with other types of crime. Most of the statistics provided are estimates of losses resulting from white-collar crime, which include:
That such crimes are not tracked more clearly suggests that less emphasis is placed on prosecuting white-collar crime than on prosecuting other types of crime (property and violent crime) in the U.S. It may also be that such statistics are simply difficult to collect, but that difficulty likely stems from the fact that no system for tracking these crimes has been put into place, because they are not seen as warranting the same attention as other types of crime.
That white-collar crimes are less likely to be tracked, less likely to be reported, less likely to be prosecuted, and more likely to be committed by people in higher social classes suggests that the way crimes are punished in the U.S. tends to favor the affluent while punishing the less affluent more harshly. Additionally, men benefit more from white-collar crime than do women, as they are more likely to attempt these crimes when they hold more powerful positions, allowing them to reap greater rewards.
Labeling Theory refers to the idea that individuals become deviant when two things occur:
This approach to deviance recognizes its cultural relativity and is aware that deviance can result from power imbalances. But it takes the idea of deviance further by illustrating how a deviant identity develops through the application and adoption of labels. Labeling theory argues that people become deviant as a result of people forcing that identity upon them and then adopting the identity.
Labels are understood to be the names associated with identities or role-sets in society. Examples of more innocuous labels might include father or lover. Deviant labels refer to identities that are known for falling outside of cultural norms, like loner or punk.
There are two additional ideas related to the labeling theory approach to understanding deviance. First, once a deviant identity is adopted, it is often the case that the past behaviors of the now deviant individual are re-interpreted in light of the new identity. The process of re-casting one's past actions in light of a current identity is referred to as retrospective labeling. A very clear example of retrospective labeling can be seen in how the perpetrators of the Columbine High School massacre, Eric Harris and Dylan Klebold, were re-cast after the incident took place. Much of their behavior leading up to the school shootings has been re-interpreted in light of the deviant identity with which they were labeled as a result of the shootings.
Another important element of labeling theory involves the idea of stigma. A stigma is a mark of infamy or disgrace - a label that is often difficult to hide or disguise and that disqualifies the individual from full social acceptance. Stigmas extend the idea of labeling theory by illustrating how individual characteristics can be the basis for attaching labels that can be life-altering. A good example of a stigma that is now increasingly difficult to hide is the publishing of convicted sex offenders' identities and information on websites. The stigma is the past behavior - the sex offense - but this identity is otherwise relatively easy to hide, as it is impossible to pick a sex offender out of a crowd. By pushing the sex offender identity into public purview, such websites stigmatize sex offenders regardless of their current behavior; they are stuck with a deviant identity that overwhelms any other identity they may have. In sum, labeling theory argues that the application of labels (role-sets) to individuals is an important element leading to deviant behavior.
One of the more interesting features of the U.S. is the extensive number of people who are currently in the correctional system. One explanation for this is the increasingly punitive approach of the criminal justice system. According to Western (2007), those who break laws in the U.S. today are twice as likely to be imprisoned as criminals a generation ago. While debated, the percentage of prison inmates who have been wrongly convicted of crimes is estimated at somewhere between less than 1% and 9%; at the high end, that could mean hundreds of thousands of prison inmates are actually innocent. The figure to the right breaks down the correctional system population by the status of individuals in the correctional system, including:
While the population of the United States is the third largest in the world (behind China and India), the percentage of the population that is in prison is the highest in the world, as illustrated by the map below.
This map illustrates that the U.S. has a large prison population both in sheer numbers and as a percentage of its population. Comparing incarceration rates by country goes beyond simply reporting incidents of criminal activity (incidents of crime are not much higher in the U.S. than elsewhere) by highlighting differences in countries' correctional systems. Countries differ in the restrictiveness of their laws and prison sentences, and differences of these types are seen when comparing incarceration rates and populations.
The U.S. has a relatively high recidivism rate (recidivism refers to the frequency of repeat offenses). More than half (some estimate around 67%) of prison inmates will be convicted on another charge within three years of having been released and return to prison. This statistic is revealing of the nature of the prison system in the U.S.: it is more interested in keeping people who commit crimes away from the rest of the population than it is in attempting to reform or resocialize individuals to make them productive members of society. One factor that contributes to the high recidivism rates is the social stigma that accompanies having been convicted of a felony in the US. van Olphen et al. (2009) found that individuals convicted of drug offenses were very likely to be reincarcerated, largely due to punitive punishments that accompanied their "convicted felon" stigma. For instance, they were often denied access to public housing and food stamps, which led many of the participants in the study to sell drugs to survive, leading to future arrests and convictions. The lack of help given to convicts released from prison increases the odds of those convicts returning to prison. Another factor that significantly increases the odds of convicts returning to prison is their return to their former neighborhoods. Convicts who move away from their old neighborhood are substantially less likely to commit another crime; distancing themselves from the social environment that encouraged their criminal activity in the first place reduces their odds of reincarceration. Finally, the stigma associated with spending time in prison leads to substantially worse physical and mental health for ex-cons, including higher rates of chronic illness, disability, psychiatric disorders, major depression, and anxiety.
A relatively recent innovation in criminal justice that has been shown to moderately reduce recidivism rates is "drug courts," alternative sentencing systems that mandate treatment and therapy rather than jail time for drug offenses. Drug courts appear to reduce recidivism by somewhere between 8 and 10 percent. That drug courts reduce recidivism is not all that surprising considering there is an actual intent to modify behavior rather than simply to remove individuals from society.
Another interesting characteristic of the U.S. is the amount of money spent on the correctional system. Policing the nation's streets is the most expensive component of the correctional system, followed by housing prison inmates. The average annual cost for one federal prisoner exceeds $20,000. The judicial process is the least expensive, but the combined expenses of all three elements total over $200 billion annually (when state, local, and federal expenditures are combined).
Even though billions of dollars are spent on the criminal justice system every year in the U.S., the financial outlays actually account for only part of the cost of mass incarceration. Millions of additional jobs and even lives are lost as a result of the stigma that follows prison inmates when released (which also explains the high recidivism rate). Convicted felons are barred from working in certain industries, have limited access to educational opportunities, and have limited access to welfare and housing benefits from the government. All of these problems combine to concentrate former inmates in poor urban neighborhoods that have limited opportunities to move out of criminal circles. Reducing the consequences of felony convictions and providing occupational and drug counseling would go a long way toward alleviating the high recidivism rates.
Another way crime statistics can go beyond simply reporting incidents of criminal activity is in highlighting differences between different groups. One difference in criminal activity is seen in the number of violent crimes committed by gender (see chart). While the difference has narrowed in recent years, men are still more likely to commit violent crimes than are women.
Another telling crime statistic that is traditionally seen as highlighting power imbalances is the number of rapes in society. While the focus of this chapter is not on exploring the motivations behind rape, the number of rapes in the U.S. and internationally can be seen to reflect power imbalances between men and women as men are far more likely to rape women than vice versa. The figures below and to the right show that rape rates in the U.S. have declined in recent years and also compare rape rates from select countries around the world.
Regardless of one's views on the War on Drugs in the U.S., one thing is certain: the war disproportionately targets African Americans. Since the inception of the War on Drugs in 1980, 31 million drug-related arrests have been made. African Americans are no more likely to use drugs than are whites, but between 1980 and 2003, arrest rates for drug offenses rose three times faster for African Americans than for whites: 225% vs. 70%. The reason: the war has focused on drug use and trafficking in the inner city rather than in the suburbs. Additionally, penalties for drugs more often found among minorities have traditionally been harsher than penalties for drugs used by whites. Crack and powder cocaine are very similar in effect, but possession of crack cocaine, which is more likely to be used by blacks (who account for nearly 80% of crack convictions), carries harsher penalties. Laws enacted in 1986 mandated minimum sentences of 5 years for possessing 5 grams of crack and 10 years for 10 grams, but the thresholds for powdered cocaine were 100 times as high: possessing 500 grams of powdered cocaine brought just 5 years. The discriminatory prosecution of African Americans for drug offenses is just one way in which the criminal justice system in the U.S. works against African American equality.
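The sentencing disparity described above reduces to simple arithmetic. A minimal sketch, using only the 1986 mandatory-minimum figures quoted above:

```python
# 1986 mandatory-minimum thresholds, as quoted above:
crack_grams_for_5_years = 5     # grams of crack cocaine triggering a 5-year sentence
powder_grams_for_5_years = 500  # grams of powder cocaine triggering the same sentence

# Grams of powder cocaine required per gram of crack to receive
# the same 5-year mandatory minimum:
disparity = powder_grams_for_5_years / crack_grams_for_5_years
print(disparity)  # 100.0 -> the oft-cited 100:1 sentencing ratio
```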
The criminal justice system in the U.S. has a significant impact on the life chances of racial and ethnic minorities, in particular, people of African descent. Serving time in prison has become a normative event for young, lower-class African-American males. The average African-American, male, high-school dropout born in the 1960s in the U.S. had a nearly 60% chance of serving time in prison by the end of the 1990s. (This probability drops precipitously for college-educated African-Americans.) A disproportionate percentage of African-Americans are in prison; African-Americans make up 12% of the U.S. population but nearly 46% of prison inmates. The long-lasting effects of criminal conviction and imprisonment (convicted felons are barred from many jobs) result in the disenfranchisement of many African-Americans; time spent in prison is time spent away from education and on-the-job training. As a result, many African-Americans end up unskilled and with criminal convictions and felony records.
Some scholars argue that the disproportionate imprisonment of African-Americans in the U.S. actually countermands the achievements of the civil rights movement. The criminal justice system in the U.S. is arguably a massive machine that produces racial inequality; indeed, the inequality in the prison system is unmatched by that in other elements of society.
Another result of the disproportionate imprisonment of African-Americans in the U.S. is that imprisoned African-Americans are not included in calculations of unemployment rates. African-Americans are substantially more likely to be unemployed than European-Americans, but most statistics do not include prison populations.
Another illustration of the disparity in punishment in the US based on race is tied to the death penalty. Only 10% of individuals in the US sentenced to death are ever actually executed, but which ones are executed is intimately tied to race. Individuals convicted of killing a European-American are five times more likely to be executed than individuals who killed a racial or ethnic minority. This suggests that the criminal justice system in the US values white victims above minority victims.
All of these elements combined lead to the conclusion that one of the most racially disparate elements of U.S. society is the criminal justice system.
The chart below tracks homicide rates in the U.S. for the past 100 years. There has been an increase over time, though it is not clear whether that increase represents an actual increase in homicides or an increase in confounding factors, such as stricter law enforcement, an increased willingness to report crimes, or changes in the definition of homicide itself.
The U.S. does not have the highest homicide rates in the world, but the rates in the U.S. are still relatively high compared to other countries (see chart).
Homicide rates also vary by the age of the victim, as shown in the chart below. Note the similarity in homicide rates between the 14-17 and 18-24 age groups.
Traditionally, people have thought of crime as the result of negative characteristics of a neighborhood (e.g., low incomes, high residential turnover rates, etc.). However, there is some evidence to suggest that crime and the negative characteristics believed to cause it actually reinforce each other. This research finds that higher crime rates lead to more concentrated poverty, a declining retail presence, and more residential turnover. This finding suggests that crime drives businesses and wealthier residents away to safer areas, which then concentrates poverty and crime in the neighborhood left behind.
Another area of current research that is of great interest to many people is the alleged effects of violent video games on behavior. Karen Sternheimer explains that a lot of the concern around video games is illustrative of the social construction of deviance. According to Sternheimer, "Politicians and other moral crusaders frequently create "folk devils," individuals or groups defined as evil and immoral. Folk devils allow us to channel our blame and fear, offering a clear course of action to remedy what many believe to be a growing problem. Video games, those who play them, and those who create them have become contemporary folk devils because they seem to pose a threat to children." (p. 13) The assumption is that playing violent video games will lead children to act out violently. However, there is a growing body of literature that is either inconclusive on this issue or that contradicts this assumption: it does not appear as though playing violent video games results in violent behavior.
The assumption that playing violent video games leads to violent behavior is inaccurate because it decontextualizes violence. Those who claim violent video games lead to violence fail to realize that violence is context dependent, and most players of video games are fully aware of this. Individuals who play video games recognize that violence is acceptable within the context of the game and that it is not okay to be violent outside of that context. Additionally, many of the studies that have claimed to find a connection between playing video games and violent behavior have failed to control for other influences on violent individuals, influences that are more likely to translate into violent behavior: neighborhood violence and instability, family violence, and even mental illness. Seldom is a connection made between adult shooting sprees in the workplace (which are far more common than school shooting sprees) and video games. Instead, people look toward contextual influences like those described above (i.e., job loss, family problems, etc.). In other words, violent video games are the folk devils for violent behavior in children, but not for adults.
Finally, the video game explanation is also illustrative of social conflict and racial discrimination. Seldom is a black violent offender's behavior explained by reference to violent video games. The assumption is that black culture encourages violence; as a result, violent behavior by young black men is not "shocking," so it does not require a folk devil to explain it. This is, of course, discriminatory. In contrast, it is generally white, middle-class violent offenders whose behavior is explained by alleging a video game connection. The fact that these violent offenders are white and middle class threatens the "innocence and safety of suburban America" (p. 17), which means it requires a folk devil culprit, absolving white, middle-class America of the blame.
One area of current research into deviance that highlights the socially constructed nature of norms is tattoos. In 2003 a survey found that 15% of the U.S. adult population had at least one tattoo; that number jumps to 28% for adults under 25. However, tattoos are a very complicated illustration of the intricacies of deviance. Many of the individuals getting tattoos do not fit the stereotypes of people who get tattoos (e.g., soldiers, sailors, bikers, etc.); many of those tattooing their bodies are high-achieving students. Additionally, tattooing is on the rise among women: 15% of women have tattoos, while 16% of men do. Intriguingly, men and women get tattoos for different reasons: men to reinforce their masculinity, women to enhance their femininity. This difference illustrates another way in which gender is something people do rather than something they are.
Another interesting aspect of tattoos is their changing meanings. While it is probably not the case that the meaning (think symbolic interactionism) of tattoos to those who get them is changing (tattoos have traditionally been used to express one's self or commemorate events), how tattoos are viewed is changing. In the early 1900s tattoos were so stigmatized that they could literally result in prosecution and served as evidence of ill intent and disrepute. Such extreme stigmatization is no longer the case, probably because 88% of Americans know someone with a tattoo. As a result, tattoos are decreasingly seen as deviant. But tattoos also illustrate that deviance is not determined by the action but by those perceiving the action. While a tattoo may mean one thing to the person with the tattoo, it is still the case that others interpret the tattoo in a multitude of ways, and those interpretations don't always align with the intended meanings. Still today, tattoos are often equated with drug use, troublemakers, and gang affiliation. This is probably why many people who get tattoos get them in locations that can easily be covered by regular business attire; that way they can still pass when not in the company of those who are more understanding of their tattoos. Tattoos, then, illustrate the sociological understanding of deviance quite well.
A race is a human population that is believed to be distinct in some way from other humans based on real or imagined physical differences. Racial classifications are rooted in the idea of biological classification of humans according to morphological features such as skin color or facial characteristics. An individual is usually externally classified (meaning someone else makes the classification) into a racial group rather than the individual choosing where they belong as part of their identity. Conceptions of race, as well as specific racial groupings, are often controversial due to their impact on social identity and how those identities influence someone's position in social hierarchies (see identity politics).
Ethnicity, while related to race, refers not to physical characteristics but to social traits that are shared by a human population. Some of the social traits often used for ethnic classification include nationality, tribe, religious faith, shared language, and shared culture and traditions.
Unlike race, ethnicity is not usually externally assigned by other individuals. The term ethnicity focuses more upon a group's connection to a perceived shared past and culture.
The division of humanity into distinct races can be traced as far back as the Ancient Egyptian sacred text the Book of Gates, which identified four races according to the Egyptians. This early treatment merged racial and ethnic differences, combining skin-color with tribal and national identities. Ancient Greek and Roman authors also attempted to explain and categorize visible biological differences between peoples known to them. Medieval models of race mixed Classical ideas with the notion that humanity as a whole was descended from Shem, Ham, and Japheth, the three sons of Noah, producing distinct Semitic (Asian), Hamitic (African), and Japhetic (European) peoples. The first scientific attempts to categorize race date from the 17th century; these early attempts developed along with European imperialism and colonization around the world.
In the 19th century a number of natural scientists wrote on race, including Georges Cuvier, James Cowles Pritchard, Louis Agassiz, Charles Pickering, and Johann Friedrich Blumenbach. These scientists shared common assumptions about race.
Races were distinguished by skin color, facial type, cranial profile and size, and texture and color of hair. Races were almost universally considered to reflect group differences in moral character and intelligence.
These early understandings of race were usually both essentialist and taxonomic; essentialism refers to unchanging and inherent characteristics of individuals and taxonomic refers to classificatory (also usually hierarchical) in nature. The advent of Darwinian models of evolution and Mendelian genetics, however, called into question the scientific validity of both characteristics and required a radical reconsideration of race.
The table below illustrates both how early definitions included essentialist and taxonomic elements and how definitions have changed over time.
| Concept | Source | Definition |
|---|---|---|
| Essentialist | Hooton (1926) | "A great division of mankind, characterized as a group by the sharing of a certain combination of features, which have been derived from their common descent, and constitute a vague physical background, usually more or less obscured by individual variations, and realized best in a composite picture." |
| Taxonomic | Mayr (1969) | "An aggregate of phenotypically similar populations of a species, inhabiting a geographic subdivision of the range of a species, and differing taxonomically from other populations of the species." |
| Population | Dobzhansky (1970) | "Races are genetically distinct Mendelian populations. They are neither individuals nor particular genotypes, they consist of individuals who differ genetically among themselves." |
| Lineage | Templeton (1998) | "A subspecies (race) is a distinct evolutionary lineage within a species. This definition requires that a subspecies be genetically differentiated due to barriers to genetic exchange that have persisted for long periods of time; that is, the subspecies must have historical continuity in addition to current genetic differentiation." |
Because racial differences continue to be important issues in social and political life, racial classifications continue. The United States government has attempted its own definitions of race and ethnicity (see for example U.S. Census) for such classifications and comparisons.
Biology and Genetics
Prejudice is, as the name implies, the pre-judging of something. Prejudice involves coming to a judgment on a subject before learning where the preponderance of evidence actually lies. Alternatively, prejudice can refer to the formation of a judgment without direct or actual experience. Prejudice generally refers to negative views of an individual or group of individuals, often based on social stereotypes. At its most extreme, prejudicial attitudes advocate denying groups benefits and rights without warrant and based solely on the unfounded views of the individual. It should be kept in mind that prejudice is a belief and may not translate into discrimination, which is the actual mistreatment of a group or individual based upon some criteria or characteristic. Although prejudice can lead to discrimination, the two are separate concepts.
Technically, prejudice should be differentiated from viewpoints accumulated through direct life experience. Such viewpoints or beliefs are not pre-judgments but post-judgments. If the assertion is made that no amount of experience ever entitles a person to a viewpoint, then this precipitates a logical absurdity: anyone who opposes strongly-held views must, by that definition, also be prejudiced, invalidating their own proposition on the grounds of... prejudice. Post-judgments, that is, beliefs and viewpoints derived from experience that maintain unfair or stereotypical perspectives on a group of people, are more accurately referred to as bias. Prejudice can be taught, socialized, or conveyed through other means, like mass media. Bias can develop through pronounced negative interactions with the stereotyped groups.
Both bias and prejudice are generally viewed as negative. However, some sociologists have argued that prejudices and biases can be seen as necessary human adaptations facilitating survival. In fact, there is evidence to suggest that humans have an innate or basic preference for people who are like them, specifically when it comes to race. Humans express more empathy when members of their own racial group experience pain compared to when individuals of other racial groups experience pain. This suggests prejudice and biases may have a biological component. Since humans do not always have sufficient time to form personal views on every other group of people, particularly people in opposition to one's own group(s), prejudices and biases may facilitate interactions (although negatively). Prejudice may also be detrimental to the individual personally by pre-judging a potential ally (e.g. refusing to patronize the only doctor in a town because he or she is black). Despite having innate preferences towards individuals who look like we do, there is substantial evidence that suggests most prejudicial attitudes and biases are learned and can be unlearned.
Racism refers to beliefs and behaviors that treat races as inherently superior or inferior, including prejudice toward and discriminatory treatment of people based on their race.
Racism is recognized by many as an affront to basic human dignity and a violation of human rights. Racism is opposed by almost all mainstream voices in the United States. A number of international treaties have sought to end racism. The United Nations uses a definition of racial discrimination laid out in the International Convention on the Elimination of All Forms of Racial Discrimination and adopted in 1965:
...any distinction, exclusion, restriction or preference based on race, colour, descent, or national or ethnic origin which has the purpose or effect of nullifying or impairing the recognition, enjoyment or exercise, on an equal footing, of human rights and fundamental freedoms in the political, economic, social, cultural or any other field of public life.
Expressions of Racism
Racism is usually directed against a minority population, but may also be directed against a majority population. The definition of a minority group can vary, depending on specific context, but generally refers to either a sub-group that does not form either a majority or a plurality of the total population, or a group that, while not necessarily a numerical minority, is disadvantaged or otherwise has less power (whether political or economic) than a dominant group. A majority is that segment of the population that outnumbers all others combined or one that is dominant.
The issue of establishing minority groups, and determining the extent of privileges they might derive from their status, is controversial. There are some who argue that minorities are owed special recognition and rights, while others feel that minorities are unjustified in demanding special rights, as this amounts to preferential discrimination and could hamper the ability of the minority to integrate itself into mainstream society (i.e. they may have difficulty finding work if they do not speak the predominant language for their geographic area).
The assimilation of minority groups into majority groups can be seen as a form of racism. In this process, the minority group sheds its distinctive traits and is absorbed into the dominant group. This presumes a loss of all characteristics which make the newcomers different. Assimilation can be voluntary or forced. Voluntary assimilation is usually the case with immigrants, who often adopt the dominant culture established earlier. A number of reasons have been postulated for voluntary assimilation.
Socially pressured to adapt, the immigrant is generally the one who takes the steps to integrate into the new environment: learning the language of the country or region, making new friends and contacts, and finding a job or going to school. The adaptation is made more difficult when the immigrant does not speak the language of his or her new home.
Assimilation can have negative implications for national minorities or aboriginal cultures, in that after assimilation the distinctive features of the original culture will be minimized and may disappear altogether. This is especially true in situations where the institutions of the dominant culture initiate programs to assimilate or integrate minority cultures. Many indigenous peoples, such as First Nations of Canada, Native Americans of the US, Taiwanese aborigines, and Australian Aborigines have mostly lost their traditional culture (most evidently language) and replaced it with the dominant new culture.
An example of a minority population discriminating against a majority population is seen in the racial apartheid that existed until 1994 in South Africa, where South Africans of European descent (the minority) discriminated against South Africans of African descent (the majority). Additional examples of minorities discriminating against majorities can be found in instances of colonial rule, in which a foreign minority governed and discriminated against an indigenous majority.
Racial discrimination is and has been official government policy in many countries. In the 1970s, Uganda expelled tens of thousands of ethnic Indians. Until 2003, Malaysia enforced discriminatory policies limiting access to university education for ethnic Chinese and Indian students who are citizens by birth of Malaysia. Today, many other policies explicitly favoring bumiputras (Malays) remain in force. Russia launched anti-Semitic pogroms against Jews in 1905 and after. During the 1930s and 1940s, attempts were made to prevent Jews from immigrating to the Middle East. Following the creation of Israel, land-ownership in many Israeli towns was limited to Jews, and many Muslim countries expelled Jewish residents, and continue to refuse entry to Jews.
While race itself is a social construction, race continues to play a prominent role in societies around the world. Race is often the basis for different types of stratification. Following are some of the ways society is stratified by race.
Race and Pollution
Devah Pager and Lincoln Quillian compared employers' responses on questions involving race-related hiring practices to their actual hiring practices by sending matched pairs of young men to apply for jobs; in each pair, both men were of European descent or both were of African descent, and one of the two had a criminal record. Pager and Quillian found that employers claimed they would be much more willing to hire an ex-offender than they actually were. Additionally, while the survey results showed no difference in hiring preferences between African-Americans and European-Americans, employers were more than three times as likely to call back job applicants with a European lineage as those with an African lineage. In short, Pager and Quillian found that employers, in their survey responses, were more open to the idea of hiring both African-Americans and ex-offenders than they were to the actual practice.
Debates continue in and among academic disciplines as to how race should be understood. Some sociologists and biologists believe race is a social construct, meaning it does not have a basis in the natural world but is simply an artificial distinction created by humans. As a result of this understanding, some researchers have turned from conceptualizing and analyzing human variation by race to doing so in terms of populations, dismissing racial classifications altogether. In the face of the increasing rejection of race as a valid classification scheme, many social scientists have replaced the word race with the word ethnicity to refer to self-identifying groups based on shared religion, nationality, or culture.
The understanding of race as a social construct is well-illustrated by examining race issues in two countries, the U.S. and Brazil.
Constructing Race in the U.S.
Constructing Race in Brazil
The social constructionist approach has not completely displaced other perspectives. Some sociologists (and other researchers) still believe that race is a valid and useful measure when understood as fuzzy sets, clusters, or extended families.
Genetic data can be used to infer population structure and assign individuals to groups that often correspond with their self-identified geographical ancestry (e.g., African, Asian, etc.). Recent research indicates that self-described race is a very good indicator of an individual's genetic profile, at least in the United States. Using 326 genetic markers, Tang et al. (2005) identified 4 genetic clusters among 3,636 individuals sampled from 15 locations in the United States, and were able to correctly assign individuals to groups that correspond with their self-described race (white, African American, East Asian, and Hispanic) for all but 5 individuals (an error rate of 0.14%). They concluded that ancient ancestry/geography, which correlates highly with self-described race and not current place of residence, is the major determinant of genetic structure in the US population. The implications of such research are significant and can be seen in different health conditions prevalent among racial and ethnic groups (see extended discussion below).
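The error rate quoted for Tang et al. (2005) is easy to verify; a quick check of the arithmetic, using only the figures cited above:

```python
# Figures from Tang et al. (2005), as cited above:
misclassified = 5     # individuals whose cluster did not match their self-described race
total_sampled = 3636  # individuals sampled from 15 U.S. locations

error_rate = misclassified / total_sampled * 100
print(f"{error_rate:.2f}%")  # 0.14% -> matches the error rate quoted above
```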
In general, genetic variation within racial groups is greater than genetic variation between them. However, the existence of genetic differences among races is well accepted. In general, genetic clusters exist that correspond tightly to the census definition of race and to self-identified ancestry. Certain genetic conditions are more common among certain races. For example, approximately 1 in 29 individuals of Northern European descent are carriers of a mutation that causes cystic fibrosis, whereas only about 1 in 65 African Americans is a carrier (source). There is a subset of conditions for which individuals of Ashkenazi Jewish descent are at increased risk (see here). Based on this knowledge individuals can be offered genetic testing based on their race, which can determine whether they are at increased risk to have a child with one of these conditions. The association between race and genetics breaks down for groups, such as Hispanics, that exhibit a pattern of geographical stratification of ancestry.
There is an active debate among biomedical researchers about the meaning and importance of race in their research. Proponents of using race in biomedical research argue that ignoring race will be detrimental to the health of minority groups. They argue that disease risk factors differ substantially between racial groups, that relying only on genotypical classes - differences in genes - ignores non-genetic racial factors that impact health (e.g., poverty rates) and that minorities would be poorly represented in clinical trials if race were ignored. However, some fear that the use of racial labels in biomedical research runs the risk of unintentionally exacerbating health disparities, so they suggest alternatives to the use of racial taxonomies.
The primary impetus for considering race in biomedical research is the possibility of improving the prevention and treatment of diseases by predicting hard-to-ascertain factors on the basis of more easily ascertained characteristics. Indeed, the first medication marketed for a specific racial group, BiDil, was recently approved by the U.S. FDA. A large study of African American males showed a 43% reduction in deaths and a 39% decrease in hospitalizations compared to a placebo. Interestingly, this drug would never have been approved if the researchers had not taken note of racial groups and realized that although the medication was not effective in previous clinical trials, it appeared to be effective for the small proportion of African-American males who were part of the study (source). Despite the controversy, it is clear that race is associated with differential disease susceptibility. Examples of some of these differences are illustrated in the table below.
| Disease | High-risk groups | Low-risk groups | Reference(s) |
|---|---|---|---|
| Obesity | African women, Native Americans, South Asians, Pacific Islanders, Aboriginal Australians | Europeans | McKeigue et al. (1991); Hodge & Zimmet (1994) |
| Non-insulin dependent diabetes | South Asians, West Africans, Peninsular Arabs, Pacific Islanders, and Native Americans | Europeans | Songer & Zimmet (1995); Martinez (1993) |
| Hypertension | African Americans, West Africans | Europeans | Douglas et al. (1996); Gaines & Burke (1995) |
| Coronary heart disease | South Asians | West African men | McKeigue et al. (1989); Zoratti (1998) |
| End-stage renal disease | Native Americans and African populations | Europeans | Ferguson & Morrissey (1993) |
| Dementia | Europeans | African Americans, Hispanic Americans | Hargrave et al. (2000) |
| Systemic lupus erythematosus | West Africans, Native Americans | Europeans | Molokhia & McKeigue (2000) |
| Skin cancer | Europeans | | Boni et al. (2002) |
| Lung cancer | Africans, European Americans (Caucasians) | Chinese, Japanese | Schwartz & Swanson (1997); Shimizu et al. (1985) |
| Prostate cancer | Africans and African Americans | | Hoffman et al. (2001) |
| Multiple sclerosis | Europeans | Chinese, Japanese, African Americans, Turkmens, Uzbeks, Native Siberians, New Zealand Maoris | Rosati (2001) |
| Osteoporosis | European Americans | African Americans | Bohannon (1999) |
Perhaps the best way to understand race is to recognize that the socially constructed boundaries and biological/genetic elements overlap. There are clearly biological differences between races, though they are small and, as noted above, there is greater variation within races than between races. But the actual criteria used for racial classifications are artificial and socially constructed, as was shown in the cases of the U.S. and Brazil.
By recognizing the overlap between the two, we arrive at a better understanding of race. However, distinctions between racial groups have been declining for years due to intermarriage. For instance, self-described African Americans tend to have a mix of West African and European ancestry. Shriver et al. (2003) found that on average African Americans have ~80% African ancestry. Likewise, many white Americans have mixed European and African ancestry; ~30% of whites have less than 90% European ancestry. If intermarriage across races and ethnicities continues, the biological and genetic distinctions will grow increasingly minute and undetectable. If a completely heterogeneous population ultimately develops, any racial classifications in that population would be nothing more than social constructs.
Controversies surrounding the definition of race will likely continue for some time. But there are important considerations that go beyond the definition of race. Race and race-related issues continue to impact society. Racial discrimination in employment and housing still occurs (source). Because race remains a significant factor in social life, sociologists feel compelled to study its effects at multiple levels.
Since the early days of the United States, Native Americans, African Americans, and European Americans have been classified as belonging to different races. But the criteria for membership in these races were radically different. For Africans, the government considered anyone with African appearance to be purely African. Native Americans, on the other hand, were classified based on a certain percentage of Indian blood. Finally, European Americans had to have purely white ancestry. The differing criteria for assigning membership to particular races had relatively little to do with biology; they had far more to do with maintaining each group's defined roles and position.
Some researchers and historians have proposed that the intent of the differing criteria for racial designations was to concentrate power, wealth, privilege, and land in the hands of European-Americans. As a result, the offspring of an African slave and European master or mistress would be considered an African. Significant in terms of the economics of slavery, the mixed-race child of a slave mother also would be a slave, adding to the wealth of the slaveowner.
Contrast the African criteria with those for Native Americans: a person of Native American and African parentage was automatically classified as African. But the offspring of only a few generations of Native Americans and Europeans were not considered Indian at all - at least not in a legal sense. Native Americans had treaty rights to land, but individuals with only one Indian great-grandparent were no longer classified as Native American, disenfranchising them from their claims to Native American lands. Of course, the same individuals who could be denied legal claim to Native American lands because they were too white were still Native American enough to be considered half-breeds and were stigmatized as a result.
In an economy benefitting from slave labor, it was useful to have as many African slaves as possible. Conversely, in a nation bent on westward expansion, it was advantageous to diminish the numbers of those who could claim title to Indian lands by classifying them out of existence. Both schemes benefitted the third group, the racially pure whites. The point being, of course, that the classifications of race in the early U.S. were socially constructed in a fashion that benefitted one race over the others.
The earliest blacks in the U.S. were brought from Africa as slaves, primarily to provide agricultural labor in the southern U.S. While migration since then has substantially altered the distribution of African Americans in the U.S., African Americans remain concentrated in the southern U.S., as depicted in the map below.
Native Americans in the U.S. are concentrated on reservations, as depicted in the map below:
The Brazilian Institute of Geography and Statistics classifies the Brazilian population into five categories: white, black, pardo (or brown), yellow, and Indigenous, based on skin color as reported by the individual being interviewed in the census.
Of particular interest to the discussion of race in this chapter is the fact that there is a racial classification that falls between "white" and "black": "pardo," or "brown." That Brazilians have more racial classifications than do people in the United States illustrates the socially constructed nature of race. Additionally, racial classification in Brazil, because it is based on self-classification and there are no objective criteria for what it means to belong to one race or another, is inconsistent about 21% of the time. Because of the mixing of the races, race is not inherited but determined purely by physical characteristics (i.e., a white father and black mother could have a "white," "black," or "pardo" child). Additionally, because race is self-determined and there is discrimination based on race (whites are favored), Brazilians have a tendency to "self-lighten," or report their race as being lighter than an independent observer might suggest. That people can "self-lighten" illustrates that race is not a fixed construct but rather a social one.
Racism may be expressed individually and consciously, through explicit thoughts, feelings, or acts, or socially and unconsciously, through institutions that promote inequalities among races.
Individual-level racism is prejudice, bias, or discrimination displayed in an interaction between two or more people. Examples of individual-level racism could include:
Children develop an awareness of race and racial stereotypes quite young (between the ages of 5 and 11) and these racial stereotypes affect behavior. For instance, children who identify with a racial minority that is stereotyped as not doing well in school tend to not do well in school once they learn about the stereotype associated with their race. Another illustration of individual-level racism in society is the resistance of Americans to classify mixed-race individuals as white if they have even "one-drop" of black ancestry. While most Americans may believe the "one-drop rule" is no longer relevant in society today, recent research suggests that it persists in racial classifications, even if they are informal.
Structural racism refers to inequalities built into an organization or system. An example of structural racism can be seen in recent research on workplace discrimination. There is widespread discrimination against job applicants whose names are merely perceived as "sounding black." These applicants are 50% less likely than candidates perceived as having "white-sounding names" to receive callbacks for interviews, no matter their level of previous experience. The researchers view these results as strong evidence of unconscious biases rooted in the country's long history of discrimination. This is an example of structural racism because it reflects a widespread, established belief system that treats people differently based upon their race. Additional examples of structural racism include apartheid in South Africa, the system of Jim Crow laws in the U.S., and the inequitable lending practices of banks (i.e., redlining). The figure below illustrates structural racism by showing how blacks and Hispanics, even when they have the same income as whites, are less likely to be approved for home mortgages (as a result of practices like redlining).
Cultural racial discrimination, a variation of structural racism, occurs when the assumption of inferiority of one or more races is built into the culture of a society. In this perspective, racism is an expression of culture and is also passed on through the transmission of culture (i.e., socialization). An interesting twist on this type of prejudice can be seen in how high-achieving secondary school students are treated. African American and Native American students with high GPAs are rejected by their peers, while Asian American and white students with high GPAs experience greater social acceptance. This suggests that some racial and ethnic groups are rewarded for academic achievement while others are punished, potentially encouraging members of the former groups to pursue academic success while discouraging members of the latter.
Historical economic or social disparity is a form of inequality caused by past racism, affecting the present generation through deficits in the formal education and other kinds of preparation in the parents' generation, and through primarily unconscious racist attitudes and actions of members of the general population. This perspective argues that African Americans, in particular, have had their opportunities in life in the U.S. adversely affected by the mistreatment of their ancestors (see slavery). Disparities in wealth, net worth, and education lend credence to this idea. The figure below illustrates how historical racism has resulted in lower odds of inter-generational transmission of wealth, which, in turn, reduces net worth for racial and ethnic minorities in the U.S.
One response to racial disparity in the U.S. has been Affirmative Action. Affirmative Action is the practice of favoring or benefiting members of a particular race in areas such as college admissions and workplace advancement, in an attempt to create atmospheres of racial diversity and racial equality. Though lauded by many as a boon to society, giving the less privileged a chance at success and working to overcome historical social disparity, the practice is condemned as racially discriminatory by others.
Another type of racism is racial profiling. Racial profiling involves the singling out of individuals based upon their race for differential treatment, usually harsher treatment. Two examples of racial profiling in the United States are often discussed. The disparate treatment of minorities by law enforcement officials is a common example of racial profiling. Another example is the disparate treatment of young, male Arabs in airports who are more likely to be subjected to extensive screening. Many critics of racial profiling claim that it is an unconstitutional practice because it amounts to questioning individuals on the basis of what crimes they might commit or could possibly commit, instead of what crimes they have actually committed.
In the US, the avoidance of racial language by European Americans has been used to suggest that racism is no longer an issue. However, the continued prevalence of institutional racism has led some scholars, such as Bonilla-Silva, to argue that a "new racism" has arisen during the post-Civil Rights era. Bonilla-Silva suggests that a "color-blind racism" ideology supports racism while avoiding any reference to race.
Genocide, the attempt by one group to completely destroy another group of people based on a characteristic its members share, is not exclusively the result of racial or ethnic tension, but it often is. One technique commonly used by those engaged in genocide, and even in war, is the use of racial epithets that dehumanize the enemy, making it easier to kill them.
Pollution and polluting facilities are not evenly distributed in the U.S. Communities made up predominantly of racial minorities are significantly more likely to be polluted and to house factories and businesses that pollute extensively. While it might seem that this is inadvertent and not intentionally racist, the evidence suggests otherwise: these communities are systematically targeted as locations for siting polluting businesses.
Not until 1967 were laws outlawing interracial marriage abolished in the United States. Prior to that time, an individual from one race who married an individual from another could be jailed and fined. These laws were referred to as miscegenation laws (miscegenation means "mixing races"). This was the experience of Mildred and Richard Loving, who married in 1958 in Washington, D.C., a district that no longer had a law against interracial marriage. Mildred was black; Richard was white. When they moved to Virginia shortly after their wedding, law enforcement decided to prosecute them, breaking into their home in the middle of the night and carrying them off to jail. Both Mildred and Richard were from Virginia, where their extended family still lived. The judge who heard their case, Leon M. Bazile, told the Lovings during their trial for miscegenation that "if God had meant for whites and blacks to mix, he would have not placed them on different continents." He also seemed to take pride in telling the Lovings, "as long as you live you will be known as a felon." The Lovings eventually contacted the American Civil Liberties Union, which took their case to the Supreme Court in 1967, resulting in Loving v. Virginia, which abolished miscegenation laws in the U.S. Even so, as the diagram to the right indicates, attitudes toward interracial marriage did not immediately improve. Still, as late as 2002, close to 10% of people in the U.S. favored a law prohibiting interracial marriage.
Sociologists make a distinction between gender and sex. Gender is the perceived or projected component of human sexuality while sex is the biological or genetic component.
Why do sociologists differentiate between gender and sex? Differentiating gender from sex allows social scientists to study influences on sexuality without confusing the social and psychological aspects with the biological and genetic aspects. As discussed below, gender is a social construction. If a social scientist were to continually talk about the social construction of sex, which biologists understand to be a genetic trait, this could lead to confusion.
While much of this chapter focuses on the socially constructed differences between men and women, it is also important to note that there are some clear physiological differences between the two sexes. In addition to different sex organs and sex chromosomes, the average male is 10 percent taller, 20 percent heavier, and 35 percent stronger in the upper body than the average female. Some researchers believe that these physiological differences may have been influenced by social/cultural decisions in our evolutionary past. Even so, when measured against their own body size, rather than on an absolute scale (e.g., how much women can carry relative to their body size versus how much men can carry relative to their body size), actual strength differences are minimal.
Women, for reasons that remain only partly understood, tend to outlive men. Women's life expectancy in the U.S. is 79.8 years; men's is 74.4. Some believe this difference is due to the riskier lifestyles of men, especially earlier in life, combined with their typically more physically stressful occupations.
Behaviorally, sitting, teething, and walking all occur at about the same ages in boys and girls. However, boys enter puberty on average two years later than girls. There are no significant differences in intelligence, happiness, or self-esteem between men and women. However, women are, statistically, twice as vulnerable to anxiety disorders and depression, but only one-third as vulnerable to suicide and one-fifth as vulnerable to alcoholism. Women attempt suicide more often than men but have lower rates of "success" because their preferred methods, unlike men's, do not involve firearms. Women are also less likely to suffer hyperactivity or speech disorders as children or to display antisocial personalities as adults. Finally, women have slightly more olfactory receptors on average and are more easily re-aroused immediately after orgasm.
Much evidence has shown that there are differences between male and female brains. In fact, the temporal lobe, the part of the brain associated with language and emotion, develops up to four years earlier in girls than in boys. On the other hand, the left parietal lobe, which is associated with mathematical and spatial reasoning, is thought to develop up to four years earlier in boys. This difference could account for the perception that girls are better with language and are more emotional, while boys are better at math. Some research also suggests that girls hear better than boys: a typical teenaged girl hears up to seven times better than a typical teenaged boy, which could help explain why boys are diagnosed with ADHD more often. Finally, there are differences in vision. Girls are better able to see facial expressions and emotions, while boys are better able to see motion. Girls rely more on the p-cells in the retina, which are associated with texture and color, while boys rely more on m-cells, which are associated with motion.
As the previous section outlined, some gender differences are attributable to biology. However, there are a number of gender differences that vary by society, environment, and/or culture, indicating they are social constructions. For example, in work group situations in the U.S., men tend to focus on the task at hand whereas women tend to focus more on personal relationships. Similarly, women eating with men tend to consume fewer calories than when they eat with other women. Both of these differences in behavior vary by culture and are therefore believed to be socially constructed. Two detailed examples of socially constructed gender differences are presented below: workforce differences and education.
Work and Occupations
Sexism is discrimination against people based on their sex or gender. Sexism can refer to three subtly different beliefs or attitudes:
Many people's beliefs on this topic range along a continuum. Some people believe that women should have equal access to all jobs. Others believe that while women are superior to men in a few aspects, in most aspects men are superior to women.
Sexist beliefs are an example of essentialist thought, which holds that individuals can be understood (and often judged) based on the characteristics of the group to which they belong; in this case, their sex group (male or female). Essentialism assumes that all individuals clearly fit into the category of male or female, which is not the case. It also assumes characteristics are immutable, which is also not the case.
A good example of sexism against women is a question that has been asked in numerous surveys over the years in the US, "Would you vote for a female candidate for president?" A 2005 Gallup poll found that 92% of Americans would vote for a female candidate, but follow-up research found that this percentage was the result of response bias. When you use research techniques that allow people to express how they really feel toward women, the actual percentage who would not vote for a female candidate because she is female is closer to 26%. Intriguingly, it is not just men who feel that way, but some women, too. In short, nearly 1/4 of Americans maintain sexist attitudes against women.
Sexism against women is often called chauvinism, though chauvinism is actually a wider term for any extreme and unreasonable partisanship toward a group to which one belongs, especially when the partisanship includes malice and hatred towards a rival group. Many forms of radical feminism can legitimately be referred to as chauvinism. This is not common usage, however, and the term is most often used to refer to male chauvinism.
While the view that women are superior to men is also sexism, only in recent years has an awareness of this reverse sexism begun to develop in public discourse. Certain forms of sexual discrimination are illegal in many countries, but nearly all countries have laws that give special rights, privileges, or responsibilities to one sex.
Recent research illustrates the pervasiveness of sexism in the media. Messner et al. found that sports coverage on major television networks focuses predominantly on men, despite the increase in female participation in sports since the passage of Title IX in 1972. In 1971, 294,000 high school girls played interscholastic sports, compared to 3.7 million boys. By 1989 that ratio changed substantially - 1.8 million girls played sports compared to 3.4 million boys. By 2004 the ratio had changed even more - 2.9 million girls compared to 4.0 million boys. At the collegiate level, the change was also substantial. In 1972, the average college in the U.S. had two women's sports teams. In just the four years between 2000 and 2004, universities in the U.S. added 631 new women's teams.
Despite the increase in participation in sports, major network news coverage of women's sports has changed very little over the last 15 years. In 1989 women garnered only 5% of air time; in 1999 that increased to 9%, but it fell back to 6% by 2005. Sports highlights shows (e.g., ESPN's SportsCenter) are even less accommodating, giving only 2% to 3% of air time to women. What's more, the little air time given to women often portrays women's sports as "novelties" or pseudo-sports and often includes gags, like the women's nude bungee jump in 1999. Additionally, much of the coverage of women in sports is sexualized, as attention is often only given to women deemed "attractive" by the news anchors (e.g., Anna Kournikova and Maria Sharapova). Whether this treatment of women in sport is intentional or not, it is a clear example of sexism in the media.
Another example of gender discrimination is the disparity in wealth between men and women. Using biographical data published in magazines and books as well as IRS income reports, Tickamyer found:
The author attributed the differences in wealth distribution to historical instances of gender discrimination. Until the 19th century, most women could not own property, and women's participation in the paid labor force outside the home was limited. Wealth among the elite may be redistributed toward a more equal balance between the sexes as increasing numbers of women enter the workforce and move into more financially lucrative positions in major corporations.
The differences in income between men and women mentioned above are partially due to discrimination, but also due, in part, to some women (including highly educated women) choosing to leave the labor force and stay home with their young children. Leaving the labor force doubly impacts income: (1) It takes away immediate income, and (2) reduces experience and tenure, lowering future earning potential. Additionally, while women have made significant inroads into many highly paid fields (e.g., medicine, law), the influx of women into those fields has slowed since 2000.
Women in some organizations are suing their employers claiming gender discrimination.
For instance, Wal-Mart is currently facing a lawsuit by some of its female employees who allege gender discrimination. Part of the plaintiffs' argument rests on the fact that, while roughly 75% of intra-store department heads are women, only 20% of store managers (who make close to $100,000 per year) are women. It is difficult to prove discrimination in such cases. In fact, many researchers point out that there may be, and probably are, other root causes, including differences in gender socialization (men believing they need to support their families as the primary breadwinners, leading to greater job commitment) and the government's emphasis on equality in pay and opportunity between genders.
Sexism can take many forms, including preventing women from attending college and paying women less than men for comparable work. Another common form of sexism is violence, especially violence toward women. In 2002, women were the victims of over 900,000 violent crimes and over 200,000 rapes or sexual assaults.
Men are more likely to be the victims of violent crime, but far less likely to be the victims of rapes or sexual assaults.
A powerful example of how gender affects everyday life comes from the recently published research of Kristen Schilt on female-to-male (FTM) transsexuals in the workplace. Schilt interviewed FTM transsexuals after their gender transitions and found that, following their change to a male identity, two-thirds of the FTM transsexuals saw increased benefits in the workplace, including receiving greater rewards for doing less work. They were also treated differently: their opinions carried greater authority and they received more recognition for their work. The FTMs who did not experience these benefits tended to be smaller in stature or members of racial minorities. In short, white males are privileged in the workplace, even when those "white males" were formerly white females. The lesson: perceived gender has a powerful influence on everyday social interaction.
Many species of living things are divided into two or more categories called sexes. These refer to complementary groups that combine genetic material in order to reproduce, a process called sexual reproduction. Typically, a species will have two sexes: male and female. The female sex is defined as the one which produces the larger gamete (i.e., reproductive cell) and which bears the offspring. The categories of sex are, therefore, reflective of the reproductive functions that an individual is capable of performing at some point during its life cycle, and not of the mating types, which genetically can be more than two.
In mammals (and many other species) sex is determined by the sex chromosomes, called X and Y. For mammals, males typically have one of each (XY), while females typically have two X chromosomes (XX). All individuals have at least one X chromosome; the Y chromosome is generally shorter than the X chromosome with which it is paired and is absent in some species. In humans, sex is conventionally perceived as a dichotomous state or identity for most biological purposes, such that a person can only be female or male.
Gender is the socially constructed component of human sexuality. Gender is an inner feeling that you are male, female, both, neither, or somewhere in between. Perhaps the best way to understand gender is as a process of social presentation. Because gender roles are delineated by behavioral expectations and norms, once individuals know those expectations and norms, they can adopt behaviors that project the gender they wish to portray. One can think of gender like a role in a theatrical play: there are specific behaviors and norms associated with each gender, just as there are lines and movements associated with each character in a play. Adopting the behaviors and norms of a gender leads to the perception that someone belongs in that gender category. Gender roles, unlike sex, are mutable: they can change. Gender is not, however, as simple as choosing a role to play; it is also influenced by parents, peers, culture, and society.
Some examples may help illustrate the distinction between gender and sex. Parents may socialize a biological boy (XY chromosomes) into what is perceived as a traditionally masculine role, which includes characteristics like independence, courage, and aggressiveness. Likewise, parents may socialize a biological female (XX chromosomes) into what is perceived as a traditionally feminine role, which includes characteristics like submissiveness, emotionality, and empathy. Assuming both children feel that their gender roles fit their identities, the masculine boy and feminine girl will behave in ways that reflect their genders. For instance, the boy may play with toy soldiers and join athletic teams. The girl, on the other hand, may play with dolls and bond with other girls in smaller groups.
(Table: feminine characteristics vs. masculine characteristics)
However, gender is fluid and can change. This can be seen by continuing the above example. It is possible for the boy to decide later in life that he no longer wishes to portray himself as traditionally masculine. The boy may adopt some traditionally feminine characteristics and become androgynous, or may adopt a feminine persona altogether (see the photos of cross-dressing drag queens for an example of this type of gender construction). Either change would involve adopting the behaviors and norms that go along with the intended gender. The same is true for the girl, who may adopt masculine characteristics.
A significant proportion of the human population does not correspond exclusively to either female or male genders or sexes. When gender identity and biological sex conflict, the result is sex discordance. Some discordances are purely biological, such as when the sex of the chromosomes (genetic sex) does not match the sex of the external genitalia (anatomic sex). For more extensive discussion of this type of discordance, see this article on intersex.
Discordances between the biological (sex) and psychosocial (gender) components of sexuality, such as when gender does not match anatomic sex, are even more common but less well understood. The vast majority of people who are discordant in some aspect of psyche or behavior do not have any detectable biological intersex condition. Human societies respond to, or accommodate, these behavioral and psychological discordances in many different ways, ranging from suppression and denial of difference to acknowledging various forms of third sex (see the Kathoey pictured above).
Some societies identify youths with atypical behavioral characteristics and, instead of giving them corrective therapy or punishing them, socialize them in such a way that their individual characteristics let them provide a useful function for the society in a recognized and respected role. Some of the roles these individuals may assume include: shaman, medicine man, tong-ki, berdache, hijra, xanith, and transgender.
Gender discordance leads to the understanding that what we traditionally take to be feminine and masculine characteristics are social (and cultural) constructions. Some people have sought to define their sexuality and sexual identity in non-polar terms, in the belief that the simple division of all humans into males and females does not fit their individual conditions. A proponent of this move away from polar oppositions, Anne Fausto-Sterling, once suggested we recognize five sexes: male, female, merm, ferm, and herm. Although her proposal was quickly rejected as a bizarre flouting of human nature and social reality, and as inimical to the interests of those whom she was attempting to champion, it expresses the difficulty and imperfection of the current social responses to these variations.
An often discussed and debated difference between men and women involves work and occupations. Women's participation in the workforce has varied significantly over time. Prior to the development of capitalism and factory-type work, women played a significant role in food production and household maintenance. With the advent of capitalism and labor outside of the home, women continued to play a significant role, though their participation in paid labor outside the home initially diminished. Also, women's participation in the labor force varied (and varies) depending on marital status and social class.
Current U.S. labor force statistics illustrate women's changing role in the labor force. For instance, since 1971, women's participation in the labor force has grown from 32 million (43.4% of the female population 16 and over) to 68 million (59.2% of the female population 16 and over). Women also make, on average, $17,000 less than men. Women tend to be concentrated in less prestigious and lower paying occupations that are traditionally considered women's jobs (also referred to as pink collar jobs). Finally, women are not paid the same wages as men for similar work. This difference is often illustrated as a ratio, as shown in the graph below. Women tend to make between 75% and 91% of what men make for comparable work, though the figure depends on how the comparison is made. For instance, college-educated women between 26 and 45 earned 74.7 cents in hourly pay for every dollar men in the same group made in 2005. However, if you compare women and men with similar profiles and qualifications, the gap is smaller: women make about 91% of what men make, a ratio that has held since the 1980s. In the 1970s, similarly qualified women made only 82% as much as their male counterparts.
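The pay ratios above are simple quotients. As a minimal sketch (the figures are the ones quoted in this section; the function itself is only illustrative, not part of any official methodology), the 2005 hourly-pay comparison works out like this:

```python
# Minimal sketch: a gender pay ratio is women's pay divided by men's
# pay, expressed as a percentage. Figures are those quoted in the text
# (2005, college-educated workers aged 26-45).

def pay_ratio_percent(women_pay: float, men_pay: float) -> float:
    """Return women's pay as a percentage of men's pay."""
    return 100.0 * women_pay / men_pay

# 74.7 cents in hourly pay for every dollar earned by men:
print(f"{pay_ratio_percent(0.747, 1.00):.1f}%")  # prints "74.7%"
```

The same function applied to the 91% and 82% figures quoted in the text would take the corresponding pay levels as inputs.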
However, at all educational and skill levels, women still make less than men, as illustrated in the figure below. That women earn less than men with equal qualifications helps explain why women are enrolling in college at higher rates than men: they require a college education to make the same amount as men with a high school diploma.
The gap between men's and women's wages narrowed during the 1980s and mid-1990s, but that momentum has fallen off and the gap now appears to have stagnated. The gap in income between genders used to be similar between middle-class and affluent workers, but it is now widest among the most highly paid. A woman earning at the 95th percentile in 2006 would make about $95,000 per year; a man at the 95th percentile would make about $115,000, roughly 21% more (and that is not including the highest earners, who are predominantly men). The narrowing of the gap in pay has also been called into question. While it appears there has been a narrowing of the gap in pay between men and women, Mulligan and Rubinstein show that much of the narrowing is actually the result of the most able women entering the workforce, not decreases in the pay gap between men and women. Thus, the apparent narrowing of pay between the sexes likely overstates any real convergence in pay.
It is quite difficult for women to climb to the top in the business world. For instance, only 3% of tech firms and just 1% of high-tech firms were founded by women, and very few are headed by women. But the women who do climb to the top of the organizational ladder in business also experience both overt and covert discrimination. For instance, companies with women on the board of directors have lower stock valuations than do companies with exclusively male boards, likely a reflection of a lack of shareholder trust in women. Women are also often put into leadership positions in corporations when companies are in a crisis and have little hope for recovery, resulting in poorer evaluations of women in leadership positions. This phenomenon of putting women into leadership positions when companies are in trouble is referred to as "the glass cliff" and is also observed in politics, where women are disproportionately chosen to run in elections that the male incumbent is almost guaranteed to win.
The most common explanation for the wage gap between men and women is the finding that women pay a motherhood wage penalty, regardless of whether or not they are actually mothers. You can think about this from the perspective of a potential employer: if you have two equally qualified candidates for a position, both in their mid-twenties, married, and straight out of college, but one male and the other female, which would you choose? Many employers choose men over women because women are "at risk" of having a child, even though they may not want children. And, of course, to the potential employer, accommodating a pregnant woman and new mother is more cumbersome than accommodating a man who becomes a father (despite the obvious need for children to continue our species). Thus, women pay a penalty for their ability to give birth. Additionally, when women do have children, this often requires a period of time outside the workforce, whether six weeks or several months, and employers take this time off into account when considering raises. This "mommy track" often results in women making less money than equally qualified men who have been in the same job for the same amount of time, because women take time off to have children and are often responsible for taking care of them while men rarely are. Thus, women are often paid less despite having the same qualifications because they are (1) at risk of having children or (2) have children and are penalized for doing so.
Another possible explanation for the wage gap between men and women has recently been proposed: customer bias toward white males. Hekman et al. (2009) found that customers who viewed videos featuring a black male, a white female, or a white male actor playing the role of an employee helping a customer were 19% more satisfied with the white male employee's performance, and were also more satisfied with the store's cleanliness and appearance, despite the fact that all three actors performed identically, read the same script, and were filmed in the exact same location with identical camera angles and lighting. The authors provide further evidence to support this claim by noting that white male doctors are rated as more approachable and competent than other doctors. They interpret their findings to suggest that employers are willing to pay more for white male employees because employers are customer driven and customers are happier with white male employees. They also suggest that solving the problem of wage inequality may require not simply paying women more but changing customer biases. Additional reasons for the disparity in pay are discussed below.
Another factor that may contribute to the higher wages of white men is the number of job leads they receive. White men, particularly those in management positions, receive more job leads from friends and colleagues than do white women and Hispanic men and women. Black men and women receive about as many job leads and tips, but only for non-management jobs. As many jobs result from social networking, white males are advantaged by their higher number of job leads, potentially contributing to their higher salaries and more prestigious jobs.
Another often studied difference between men and women is educational attainment. For a long time, higher education (undergraduate and graduate education) was an exclusively male bastion. Women did eventually gain access to institutions of higher learning, but parity on a number of levels has still not been achieved. One measure of educational attainment where women have made great inroads is college attendance. In 1960, 37.9% of female high school graduates enrolled in college, compared with 54.0% of male high school graduates. By 2002, more female high school graduates were enrolling in college than males: 68.4% of females vs. 62.1% of males. Women have, in fact, made significant progress in this respect. Women now earn more Bachelor's and Master's degrees than do men, and in 2009, for the first time, they earned more PhDs. Women have also made significant inroads into some of the traditionally most prestigious professions: 40% of medical school graduates are women, and women make up a large percentage of law school students as well.
Despite the progress, there are still problems. While women are entering college at higher rates and even earning more degrees, the degrees are in less prestigious areas (e.g., social sciences and humanities compared to physical sciences) and women with degrees still earn less than do men with comparable degrees. For instance, in medicine, women tend to concentrate in lower paying specialties (e.g., dermatology and family medicine). The highest paid specialties are dominated by men and will be for decades to come, based on the pipeline of residents: 28% of radiology residents in 2004-5 were women, and only 10% of orthopedic surgery residents were.
At the primary and secondary levels, girls often do not do as well as boys, particularly in math and the sciences. One recent study offers a partial explanation for why this might be the case: highly math-anxious female elementary school teachers pass their math anxiety on to the girls in their classrooms, but not to the boys. At the beginning of the school year, there were no differences in math anxiety between the boys and girls, but in classes taught by math-anxious female teachers, girls developed math anxiety and boys did not. This anxiety led girls to believe boys were better at math than girls, though there is no evidence to suggest that is actually the case.
Sociologists and other social scientists generally attribute many of the behavioral differences between genders to socialization. As discussed in the chapter on socialization, socialization is the process of transferring norms, values, beliefs, and behaviors to future group members. In gender socialization, the groups people join are the gender categories, male and female. Thus, gender socialization is the process of teaching males and females the norms, behaviors, values, and beliefs of their gender group.
Preparations for gender socialization begin even before the birth of the child. One of the first questions people ask of expectant parents is the sex of the child. This is the beginning of a social categorization process that continues throughout life. Preparations for the birth often take the infant's sex into consideration (e.g., painting the room blue if the child is a boy, pink for a girl). Many of the gender differences just described are attributed to differences in socialization, though it is possible genetic and biological factors play some role. It is important to keep in mind that gender differences are a combination of social and biological forces; sometimes one or the other has a larger influence, but both play a role in dictating behavior.
One illustration of early life gender socialization can be seen in preschool classrooms. Children in preschool classrooms where teachers were told to emphasize gender differences saw an increase in stereotyped views of what activities are appropriate for boys and girls, while children with teachers who did not emphasize gender showed no increase. This study supports the idea that subtle cues that surround us in our everyday lives strongly influence gender socialization.
Research finds that gender differences in work and occupations begin with adolescents' first jobs.
Researchers attribute these differences to gender socialization and differential opportunities for boys and girls.
Another example of research finding differences in behavior between genders can be seen in self-ratings of attractiveness. Using fifty-five Johns Hopkins University undergraduates (24 of them female), the authors had the students fill out questionnaires designed as self-appraisals of attractiveness. The authors then used a panel to rate the attractiveness of the participants (a more objective measure). The researchers found that women are fairly accurate in assessing their own attractiveness but men are not. They explained their findings by discussing the salience of attractiveness for women, a characteristic learned through socialization: attractiveness is a more important component of women's lives than men's. This is seen in the disparity between men and women in the number of cosmetic surgeries they undergo; of the 11.5 million cosmetic surgeries performed in 2005, women accounted for 85% to 90%. Because attractiveness is so important for women, they are more attuned to their actual attractiveness than are men.
Sociobiologists and evolutionary psychologists argue that much of social life as we know it today has roots in human evolution and biology. According to these theories, some of the gender differences in behavior are attributable to differences in physiology. For instance, differences in sexuality and sex drives may be due to human evolution. Women, who physically invest more in the creation and bearing of children (through pregnancy), may have a greater propensity toward monogamous relationships as having a partner to help them improves the chances of their child's survival. Men, on the other hand, may be inclined less toward monogamy and more toward polygamous relationships as their investment in offspring can be (and often is) far smaller than that of women. Evolutionary psychologists and sociobiologists use this theory to explain differences in sexual behavior, attitudes, and attractions between men and women: women tend to be attracted to men who can provide support (i.e., protection and resources) and prefer fewer sexual partners than do men; men, on the other hand, are attracted to fertile women (the symbols of which have changed over time) and prefer more sexual partners.
In the structural-functionalist perspective, developed in the 1940s and 1950s, genders are viewed as complementary: women take care of the home while men provide for the family. Much current research, especially since the women's movement of the 1960s and 1970s, criticizes this approach for supporting the status quo and condoning the oppression of women.
In contrast to the status quo supporting structural functionalist approach, social conflict theory argues that gender is best understood in terms of power relationships. Men's dominance of women is seen as an attempt to maintain power and privilege to the detriment of women. This approach is normative in that it prescribes changes to the power structure, advocating a balance of power between genders. (See also feminist theory.)
Stratification affects our everyday lives.
Stratification refers to the hierarchical arrangement of people in a society. This chapter focuses on economic stratification: how people are differentiated based upon their wealth and/or power. Sociology has a long history of studying stratification and teaching about various kinds of inequality, including economic inequality, racial/ethnic inequality, gender inequality, and other types of inequality. Inequality means people have unequal access to scarce and valued resources in society. These resources might be economic or political, such as health care, education, jobs, property and land ownership, housing, and the ability to influence government policy.
Statistics on United States and global inequality are widespread and alarming. Consider this:
Although inequality is everywhere, sociologists are interested in many controversies and questions about it: Where did inequality come from? Why does it continue? Do we justify inequality? Can we eliminate inequality? Can we make a society in which people are equal? Before answering these complex questions, we will broadly define socioeconomic status and social class in America. The chapter then turns to dominant theories on stratification and explores class, race, and gender inequality in more detail. We look at how capitalism forms an important context for inequality. We end with the consequences of inequality and theories explaining global inequality.
Building on the ideas of Max Weber, who saw three main dimensions of stratification (class, status, and party), contemporary sociologists often define stratification in terms of socioeconomic status (or SES). There are a variety of ways to measure SES, including educational attainment, income, wealth, and occupational prestige. These measures reflect three characteristics of individuals: power, property, and prestige. These three characteristics combine to indicate someone’s social class or socioeconomic status.
Power refers to someone’s ability to get others to do his/her will, regardless of whether or not they want to. Legitimate power, power given to individuals willingly by others, is called authority. Illegitimate power, power taken by force or the threat of force, is called coercion.
Property, as used in this context, refers to the sum total of one’s possessions as well as one’s regular income. Property goes beyond income as a measure of social class because it reflects accumulated wealth (e.g., homes, stocks, bonds, savings) in addition to earning potential. Property is a better overall measure of social class than income, as many individuals who are considered wealthy actually have very small incomes.
Prestige refers to the reputation or esteem associated with one’s position in society. Prestige used to be associated with one's family name, but for most people in developed countries, prestige is now generally tied to one's occupation. Occupations like physicians or lawyers tend to have more prestige associated with them than occupations like bartender or janitor. An individual’s prestige is closely tied to their social class – the higher the prestige of an individual (through their occupation or maybe family name), the higher the social class.
These three indicators tend to go hand-in-hand or lead to each other, such as a Supreme Court justice who is usually wealthy, enjoys a great deal of prestige, and exercises significant power. In some cases, however, a person ranks differently on these indicators, such as funeral directors. Their prestige is fairly low, but most have higher incomes than college professors, who are among the most educated people in America and have high prestige.
Sociologists Dennis Gilbert and Joseph Kahl developed a model of the American social class structure. Briefly, the upper class in America (3% of the population) is divided into the upper-upper class (1% of the U.S. population), earning hundreds of millions to billions in income per year, and the lower-upper class (2%), earning millions in annual income. The middle class (40%) is divided into the upper-middle class (14%), earning $76,000 or more per year, and the lower-middle class (26%), earning $46,000 to $75,000. The working class (30%) earns $19,000 to $45,000. The lower class (27%) is divided into the working poor (13%, earning $9,000 to $18,000) and the underclass (14%, earning under $9,000).
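The population shares in the Gilbert-Kahl breakdown above should account for the whole population. As a quick sketch (the dictionary layout is just an illustrative way to organize the figures quoted in the text, not part of the model itself), the shares do sum to 100%:

```python
# The Gilbert-Kahl class shares quoted in the text, as fractions of
# the U.S. population. Income labels are the annual figures from the text.

gilbert_kahl_shares = {
    "upper-upper class (hundreds of millions to billions)": 0.01,
    "lower-upper class (millions)": 0.02,
    "upper-middle class ($76,000+)": 0.14,
    "lower-middle class ($46,000-$75,000)": 0.26,
    "working class ($19,000-$45,000)": 0.30,
    "working poor ($9,000-$18,000)": 0.13,
    "underclass (under $9,000)": 0.14,
}

total = sum(gilbert_kahl_shares.values())
print(f"Population covered: {total:.0%}")  # prints "Population covered: 100%"
```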
Among America’s working class and working poor are hotel housekeepers, waitresses, house maids, and retail clerks. In Nickel and Dimed: On (Not) Getting By in America, Barbara Ehrenreich describes her experience of working a series of low-wage jobs in 1998 and trying to survive on her wages. Among other low-wage jobs, she worked at Wal-Mart, earning $6.00 per hour. In addition to trying to survive on her wages, she described how working overtime without pay was the custom at Wal-Mart: managers instructed workers to punch out on the time clock and then begin additional work (without pay).
Because of the Great Recession from 2007-2009, the gap between the rich and poor has increased in America. Today, the richest one percent of Americans earn nearly a quarter of the country’s income and control 40 percent of its wealth. The gap between the wealth of white families and the wealth of African-American families and Hispanics has also increased. The average wealth of a white family in 2009 was 20 times greater than that of the average black family, and 18 times greater than the average Hispanic family. In other words, the average white family had $113,149 in net worth, compared to $6,325 for Hispanics and $5,677 for African-Americans.
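The "times greater" figures in the paragraph above follow directly from the net-worth numbers it quotes; a quick arithmetic check:

```python
# 2009 average family net worth, as quoted in the text.
white_net_worth = 113_149
hispanic_net_worth = 6_325
black_net_worth = 5_677

# The ratios quoted above (20x and 18x) are the rounded quotients:
print(round(white_net_worth / black_net_worth))     # prints 20
print(round(white_net_worth / hispanic_net_worth))  # prints 18
```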
Let’s take a step back now and see where inequality really began.
Origins of Inequality and private property
The persistence of inequality
Can we eliminate inequality?
Inequality and poverty didn’t just drop down from the sky. The previous section shows it is embedded in society in many ways, but if the conditions that generate social inequality are conscious and intentional creations of human actions, they can be changed. We will examine this complex issue in the next section.
Two classic approaches to stratification provide interesting insights into this phenomenon: structural-functionalism and conflict theory.
Structural-Functionalism on stratification
Conflict Theorists on stratification
Can we make a society in which people are equal?
Societies could be redesigned so that they are based on cooperation. Marxists argue that a central component of a more equitable and humane society would be the idea, “from each according to their ability, to each according to their need.” In other words, each person should produce or work to the best of their ability according to their talents, and each person should receive the fruits of this labor according to their need, irrespective of what they have produced. Conflict theorists also believe that democracy, or some sort of group decision making, is more humane and more effective. Stratification systems that concentrate decision making and power in the hands of a few are destined not to serve the interests of the many people at the bottom of the pyramid.
Economic class, in conjunction with race and gender, shapes the opportunities, the privileges, and the inequalities experienced by individuals and groups. The United States continues to be greatly stratified along these three lines. This was seen when Hurricane Katrina hit in 2005. In New Orleans, the roles of class, race, and gender were made apparent to the U.S. public. Many of the televised images showed poor African Americans, many of them women and children, abandoned in the storm, without resources for several days and without the basic necessities of food and water. Though the storm displaced hundreds of thousands of people from all backgrounds, classes, colors, and genders ‘equally,’ not all were affected the same. The wealthy had cars to leave New Orleans, and credit cards and bank accounts for emergency hotels and supplies. They also had insurance policies for rebuilding. The unequal impact of this tragedy was not unique. In the sinking of the Titanic in 1912, for example, 60% of first class passengers survived, while only 24% of third class passengers survived. One child in first class died, while 49 children in third class died. The poor in the U.S. and around the world are most likely to suffer from ‘natural’ and human tragedies.
Class plays an important role in the forms of privilege and oppression. Capitalism produces enormous amounts of wealth, along with increasing levels of inequality, both within the U.S. and around the world. These inequalities result from a class system based on growing gaps in income, wealth, and power between the few people on top and the masses of people at the bottom. Capitalism is a system that produces cruel consequences. For those at the bottom, the costs are great, with living conditions among the poor comparable to those found in developing countries. Capitalism causes competition, stress, and anxiety among members of the working and middle classes, as people have little control over their work or over whether they can keep their jobs. Despite the myth that hard work leads to getting ahead and making it, for the most part people have little power to improve their class position. Research shows people are as likely to move downward as upward in the class system. Currently, corporate downsizing, the loss of industrial jobs to other countries, the expansion of low-paying service occupations, and the Great Recession beginning in 2007 have combined to leave many people struggling to keep the jobs they have, rather than being able to move upward.
The increase of immigrants in the U.S. and the loss of jobs to other countries feed a common belief that the greatest problem American workers face is unfair competition from immigrants at home and workers abroad. This belief, ‘they are taking our jobs,’ ignores the capitalist system itself, which by its nature increases the wealth of the few owners by controlling workers and keeping wages as low as possible, allowing a few owners to control the majority of wealth while leaving a tiny share to be distributed among everyone else.
Capitalism also takes advantage of gender inequality. Women workers are exploited for cheap labor as nannies and maids in New York, in clothing sweatshops in Los Angeles, and on rose farms in Ethiopia that pay women a dollar a day. Wal-Mart, the nation’s largest private employer, has in particular been accused of discrimination against women. Although over seventy percent of Wal-Mart’s hourly workers are female, women account for only a third of all management. Wal-Mart has been sued for unfair practices in the training, payment, and promotion of its female employees.
Capitalism would no longer function if the mass of women stopped doing the shopping, cooking, cleaning, and caretaking. For the most part, women raise the next generation of workers on which capitalism depends. Women do this work without receiving any extra pay or benefits.
We now turn to three levels of analysis on stratification.
Micro-level prestige and influence
Meso-level access to resources
Macro-level factors influencing stratification
Consequences of Inequality
Almost half the world, over 3 billion people, lives on less than $2.50 a day. 78 percent of Ethiopians earn less than $2.00 a day. (See Pictures 4, 5, and 6). 86% of the population in Zambia lives in poverty while 4% of the population in Belgium lives in poverty. Why is there such a large gap between Zambia and Belgium? Why are the poor still poor? Who is to blame for poverty? Social scientists offer a few theories which help to explain the causes and consequences of global inequality.
Development and Modernization
World Systems Theory
The New International Division of Labor theory
There are many consequences that result from inequality, including reduced access to education and limited social capital. Inequality also appears to affect cognitive abilities. For instance, children raised in less affluent homes have a reduced likelihood of reaching their cognitive potential because of their limited access to resources. In other words, children in poorer homes are sometimes less intelligent not because they lack the same genetic intelligence but because their resource-poor environment limits their genetic potential.
The origins of inequality can be found in the transition from hunter/gatherer societies to horticultural/pastoralist societies. Here, it might be useful to describe a few characteristics of these societies.
In hunter/gatherer societies (around 50,000 B.C.), small groups of people gathered what they could find and also hunted and fished. People gathered and hunted food for all of their needs. There was very little trading between groups, and there were not many inequalities between them. There was no surplus of goods; everyone possessed basically the same as everyone else. The division of labor was minimal, with people doing almost the same jobs as one another. Food gathering and food production were the focus of work.
In horticultural/pastoralist societies (around 12,000 B.C.), groups grew to be very large and humans settled down in one place. For the first time, people had time to do work other than producing food, such as making leather, making weapons, and practicing other special skills. This new division of labor led to a surplus of goods, and the groups then traded with each other. This led to inequality because some people accumulated more possessions than others.
Fast forward many millennia to just before industrialization began. In the 14th, 15th, and 16th centuries, land and space in Europe were held collectively. Life was brutal and harsh, but there was a joint and shared responsibility in the way people lived their lives and went about their work. People farmed land collectively because they saw it as something for everyone to take care of, with everyone enjoying the fruits of their labor.
The concept of private taking and private property began to flourish in the late 15th century beginning in Europe and spreading around the world. Jean-Jacques Rousseau linked private property with inequality in his book, Discourse on Inequality. Collective land and space, once shared by all, began to be divided up into private takings and private ownership (and this continues today). Land, oceans, and air, once shared by everyone in the world, began being bought and sold like products in a store. The great land masses of the world were reduced to private property. Laws and regulations were created that allowed a country to claim a certain amount of water for exploitation. Air was divided into air corridors that were bought and sold for commercial traffic for airplanes. Today, the right to private property is an important value in most societies. With deregulation, privatization, and free trade, we continue to see a private taking and private ownership of entities once shared by everyone.
The idea that there should be equality in society emerged in the 17th and 18th centuries in the writings of Hobbes and Locke. Their thinking helped people consider that inequality was the result of the actions and intentions of social institutions and specific groups, not the will of God. Even so, questions about the origin of inequality, and about why it continues, remain today.
Sociologist Joel Charon offers a few reasons why inequality continues in society. His arguments reflect social reproduction theory, which focuses on the roles of institutions and cultures in the perpetuation of inequality and the process by which the social class structure is maintained. First, the rich and powerful protect the system of inequality. They are typically the owners of the means of production (factories, machinery, land, transportation) and have the resources to protect themselves and their positions. Karl Marx argued that the rich and powerful have control over the means of production, which is economic power, and they also have great influence on government power, including the rules governments follow, the people who work for the government, and the laws governments make. The rich and powerful also have control over the media, the schools, the courts, and many other parts of society and they support institutions (religion, economy, and education) that favor them. For example, the Walton family made $3.2 million in political contributions in 2004. The Waltons have great economic power and also the ability to influence government through large donations to political actors that protect their positions and their businesses. Inequality continues because those at the top protect their positions and use their power to influence other parts of society.
Second, culture teaches the acceptance of inequality. Research shows that Americans believe in equality; research also shows that Americans view inequality as justified. One belief system that people commonly embrace (mistakenly, according to contemporary economic research) is that the rich and powerful are more talented, hardworking, and intellectually superior, and thus more deserving. The poor are poor because they are lazy, irresponsible, or unmotivated; if they can’t make it, it is their fault. These are ideologies that protect the system of inequality. They legitimize the position of the rich and powerful and explain and justify the position of the poor. People tend to accept inequality, not because they are happy with their situation, but because over time they come to believe their situation is natural and normal and is what they can expect from life.
In the United States, important cultural values are taught early on which support the system of inequality. These include a focus on the individual, a value of hard work, measurable achievement, and the ‘sacred’ ideal of equal opportunity. People accept as truth these beliefs: ‘If you work hard, you can rise to the top.’ ‘You can be anything you want to be.’ ‘Where there’s a will, there’s a way.’ ‘America is the land of opportunity; anyone can make it if they try hard enough.’ ‘Work hard, get an education, and don’t give up when the going gets rough.’ These values support the inequality that already exists and these values deny the impacts of inequality.
The American dream contains the belief that every individual can achieve prosperity and success through hard work and self-discipline. For example, U.S. President Barack Obama said in a speech in 2005, “…whether chance of birth or circumstance decides life’s big winners and losers, or whether we build a community where, at the very least, everyone has a chance to work hard, get ahead, and reach their dreams.” While these might be inspirational words, they focus on the individual and leave out the social structural causes of inequality and poverty, such as the high unemployment rate, inheritance laws that allow families to pass on wealth, lack of state supported child care or health care, and tax policies that favor the wealthy. As a startling example, the Walton family received a federal tax cut of $91,500 per hour during the 2004 tax year.
Third, people are socialized to accept their position in life. The rich and powerful socialize their children to expect wealth and power. Parents, teachers, and friends show us our position in society and teach us to expect that same level. Parents who attended prestigious boarding schools and Ivy League colleges such as Harvard, Yale, or Princeton expect their future offspring will attend such schools. Harvard researcher Michael Hurwitz found that legacy students were 45% more likely to be admitted to elite colleges.
Ivy League colleges, private country clubs, debutante balls (a formal introduction and presentation of young women to society) and the social register (a book listing the most important and famous American families) are ways that the wealthy maintain their cohesion and pass on their prestigious positions to their children.
In addition, education helps to reinforce an acceptance of inequality, and education prepares each social class differently, depending on the roles students will play after leaving school. This means teaching the appropriate skills but also the appropriate values for each social class. Elementary and high schools in the U.S., in particular, teach different values to different social classes. Working-class students learn obedience; upper-middle class students learn leadership and creativity. Upper-middle class students participate in activities that focus on public performance and skill development. Working-class students participate in informal play, visiting family, and ‘hanging out.’ Socialization brings the acceptance of a culture that justifies inequality, and it normally brings an acceptance of one’s relative position in the system of inequality.
Finally, police, courts, and prisons work together to protect the system of inequality. Research has shown that the criminal justice system in the U.S. is biased against the poor from start to finish, from the definition of what constitutes a crime through the process of arrest, trial, and sentencing.
Much of society seems to encourage and protect the system of inequality. Given all of these ways inequality is perpetuated in a society, is it at all possible to eliminate it?
The structural-functional approach to stratification asks the same question that it does of the other components of society: What function or purpose does it serve? The answer is that all parts of society, even poverty, contribute in some way to the larger system’s overall stability, according to this theory. Stratification and inequalities are inevitable and beneficial to society. The layers (stratification) are the inevitable sorting of unequal people. The layering is useful because it ensures that the best people are at the top, while those deemed less worthy are further down the pyramid, where they hold less power and receive fewer rewards than the highly qualified people at the top. Inequality ensures that the most functionally important jobs are filled by the best qualified people. In other words, it makes sense for the CEO of a company, whose position is more functionally important, to make more money than a janitor working for the same company. A job’s functional importance is determined by the degree to which the job is unique, meaning whether few other people can perform the same function adequately. Garbage collectors are important to public sanitation, but do not need to be rewarded highly, because little training or talent is required to perform their job. Doctors should be rewarded highly, because great training is required to do their job. It is logical that society must offer greater rewards (e.g., income, vacations, promotion) to motivate the most qualified people to fill the most important positions.
There are several obvious problems with this approach to stratification. First, it is difficult to determine the functional importance of any job, as the accompanying specialization and inter-dependence make every position necessary to the overall operation. The engineers in a factory, for example, are just as important to the success of a project as the other workers in the factory. In another example, a primary school teacher in the U.S. earns $29,000 per year, whereas a National Basketball Association player can earn as much as $21 million per year. Are basketball players more essential to society than teachers? Are basketball players more functionally important than teachers? In 2009, comedian Jerry Seinfeld earned $85 million. Do his earnings demonstrate his contribution to society? If NBA players or famous comedians went on strike and decided not to work, most people would not notice. However, if teachers, bus drivers, nurses, cleaners, garbage collectors, or waitresses stopped working, society would close down. There is little connection between income and jobs that are functionally important in a society.
Second, this approach assumes that the system of stratification is fair and rational, and that the ‘best’ people end up on top because of their superiority. But, in real life, the system does not work so easily or perfectly. Former U.S. president George W. Bush, for example, was not the smartest or most politically talented individual but he was well connected and born at the top of the stratification system (white, male, wealthy, American), and therefore was elected to a position with great power – the U.S. presidency.
Another problem with this approach is that it assumes that only a few ‘chosen’ people should have all the power and all the material wealth, rather than distributing it equitably, or distributing it to those who need it most.
Conflict theorists argue that stratification is dysfunctional and harmful in society. Stratification benefits the rich and powerful at the expense of the poor. For example, many wealthy families pay low wages to nannies to care for their children, gardeners to attend to their rose gardens, and maids to pick up their dirty socks. Capitalism, in particular, benefits the rich. Corporate welfare, an arrangement of direct subsidies, tax breaks, and other support that the government has created for big businesses, is one example. As mentioned previously, the Walton family receives enormous tax breaks. Inequality is inevitable within a system that has individual competition at its core, and therefore ‘winners’ and ‘losers.’ Conflict theorists believe that this competitive system, together with the way the game is ‘fixed’, ends up creating and perpetuating stratification systems. Competition and inequality are not inevitable but are created and maintained by people.
Functionalists criticize this approach by arguing that people do not always act largely out of economic self-interest. For example, Chuck Feeney, the creator of Duty Free Shoppers, has given $4 billion to charities. Bill Gates has given 58% of his wealth to charity. In contrast, the Walton family has given less than 1% of their wealth to charity.
Functionalists also argue that conflict theorists underestimate people’s ability to move upward in society. They argue that if people really want to succeed, they can do so through hard work.
Stratification occurs in small groups and face-to-face interaction. At the beginning of the chapter, we looked at how stratification begins when we are young. Clothing, expensive toys, a new bicycle, a fancy car, and the jobs our parents held were symbols that differentiated and separated us in elementary school, shaping which friends we played with and the interactions we had.
Throughout our lives, wealth, power, and prestige are given to individuals who have knowledge and access to important information and influential people in society. Gender and race influence the extent of our networks, as do individual qualities such as leadership, self-confidence, and physical attractiveness.
Our positions and connections in organizations and institutions lie within the stratification system. This impacts how we experience life and how we interact with other individuals and groups. As mentioned earlier, parents, teachers, and friends show us our position in society and teach us to expect that same level. Education prepares each social class differently, with different skills and values taught to each class. The police, courts, and prisons reinforce the stratification system. Our position in the system determines the type of health care we receive. All of these institutions support the stratification system by favoring the rich and powerful.
Our position in the international economic system in the world shapes our opportunities throughout our life and our access to important resources. Let’s look at Cape Verde as an example. Located 450 kilometers off the coast of Senegal, West Africa, it is geographically and economically isolated from the rest of the world. (See Picture 1.) About one quarter of the population remains unemployed and an additional 26 percent are underemployed. The poverty rate in 2003 was 37 percent, including 20 percent who are identified as extremely poor. The country is ranked 38 in the Human Poverty Index by the United Nations. Cape Verde is a debtor nation with a total external debt of $360 million at the end of 2002. Besides being in debt to countries of the North (the U.S. and Europe), Cape Verde is part of the global economy in other ways. The country’s major industries are owned by nations of the North. Forty-nine percent of the banks, hotels, airlines and shipping lines formerly owned by the Cape Verdean government have been sold to foreign investors. In addition, Portuguese investors own forty percent of the state telecommunications company. The dominance of foreign investors, even in industries that supply the most basic needs, such as water, is a result of policies of privatization, a key element of neoliberal and Washington Consensus economic “reforms.” (See Picture 2.) And the International Monetary Fund continues to push its privatization drive, demanding that Cape Verde privatize its few remaining public enterprises, including the national airlines, the national oil supply company, the national transportation company, and others. Both the country’s private sector business class and low-income households have been greatly impacted, experiencing job loss and price hikes. On one island, the increased cost of privatized energy has forced people who cannot afford the electricity to return to traditional oil lamps.
Low-income women especially have been impacted by structural adjustment programs, which have cut governmental provisions for health, education, and food. Macro-level factors such as policies implemented by the World Bank and International Monetary Fund impact job opportunities, the prices of food, water, and electricity, and day-to-day life for citizens in Cape Verde and around the world. (See Picture 3.)
In the opening paragraph and throughout this chapter, we presented many consequences of inequality. What class we belong to directly relates to our individual life chances. Let’s look at a few more points in detail. The wealthy and well-educated are much more likely to be in good health, and have access to good medical care, than the poor. The poor have shorter life expectancies and are at greater risk for chronic illnesses. Children born into poor families are at much greater risk of dying during their first year of life from disease, accidents, or violence. When medical attention is needed, its huge cost prevents the poor from seeking care. For many Americans, the high cost of insurance prevents access to good medical care. Approximately 45.7 million people in the United States were without health insurance coverage in 2007. Hunger is also connected to class. It is estimated that 13 percent of children under age 12 are hungry or at risk of being hungry. Among the working poor, almost 75% of the children are thought to be in this category. In addition to the high cost of food, the lack of affordable housing is a critical concern for poor families, and many families live in weekly hotels, which are cheaper than paying monthly rent. In Orange County, California, low-income parents working full-time at Disneyland for $9 per hour still struggle to find affordable housing, so parents with young children live across the street in motels.
The poor are not able to provide the same educational opportunities for their children as the wealthy are. School districts in wealthy suburban areas tend to pay higher teachers’ salaries, have newer buildings, and provide sophisticated equipment. Students in central city schools and poverty stricken rural areas often attend rundown schools that lack necessary equipment and teaching materials.
Private tutors, SAT preparation courses, and charter schools allow children from wealthy families to gain entrance to elite colleges and find jobs more quickly after graduation. And, as previously mentioned, their children have a good chance of attending the same elite college. The cycle of wealth, power, and prestige continues.
Modernization theory blames tradition for global poverty. These theorists argue that poor societies stay poor because they hold onto traditional attitudes, beliefs, technologies, and institutions, such as traditional economic systems and forms of government. In contrast, in the modern world, the rise of capitalism brought modern attitudes, modern technologies such as machinery and electronics, and modern institutions, which helped countries progress and achieve a higher standard of living. Given enough time, modernization will occur everywhere in the world. Eventually, global capitalism and its modern corporations will carry these modern ideas, technical innovations, and efficient institutions everywhere. Modernists believe large economic growth is the key to reducing poverty in poor countries.
Dependency theory blames colonialism and neocolonialism (continuing economic dependence on former colonial countries) for global poverty. Countries have developed at an uneven rate because wealthy countries have exploited poor countries in the past and continue to do so today through foreign debt and transnational corporations (TNCs). Historically, wealthy nations have taken great quantities of materials from poor countries, such as the minerals and metals necessary to make automobiles, weapons, and jewelry. In addition, large amounts of agricultural products that can be grown only in the hot climates of poor countries, such as coffee, tea, sugar, and cocoa, have been exported to and processed in wealthy countries. Wealthy countries would not be as rich as they are today if they did not have these materials. Also, wealthy countries increased their own profits by organizing cheap labor through slavery. King Leopold II, for example, who was King of Belgium from 1865-1909, forced hundreds of thousands of men, women, and children to work as slaves in the Democratic Republic of Congo. The invention of the bicycle tire in the 1890s, and later the automobile tire, meant that rubber was in high demand, and wild rubber vines were widespread in the Congo, earning Leopold millions. The Democratic Republic of Congo is still suffering today from the plunder of resources and the torture and killing of millions during Leopold’s rule.
Dependency theorists believe large economic growth is not necessarily the key to reducing poverty and developing. Instead, poor countries are trapped by large debts which prevent them from developing. For example, between 1970 and 2002, the continent of Africa received $540 billion in loans from wealthy nations and from the World Bank and IMF. African countries have paid back $550 billion of their debt but they still owe $295 billion. The difference, of course, is a result of compound interest. Countries cannot focus on economic or human development when they are constantly paying off debt. In addition, economic relationships between countries tend to benefit the wealthier countries. Some of the land in Cape Verde, for example, could be planted and harvested to feed people but is planted instead with cash crops for foreign exchange. Due to indebtedness and foreign dependence, fresh produce is regularly sold abroad or processed into a nonperishable form, such as canned tuna, for export rather than consumed by the population. Widespread malnutrition is one of the effects of this foreign dependency. This is common around the globe. Brazil is the second largest exporter of agricultural products, yet 50% of its population is malnourished. Although Ethiopia has one of the largest cattle populations in Africa, and much of its population suffers from malnutrition, the government continues to export large numbers of cattle to the Middle East. Even during the peak of the infamous 1985 famine, the government was sending dried meat to Egypt. Sen and Grown call this the denationalizing of the Global South: foreign trade and business get in the way of the freedom of local governments.
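The arithmetic of this debt trap can be sketched in a few lines of code. The interest rate and payment schedule below are hypothetical round numbers chosen for illustration, not the actual terms of the African loans, which varied by country and lender:

```python
# Illustrative sketch of compound interest on debt: a debtor can repay
# more than the original principal yet still owe money, because interest
# compounds on the outstanding balance each year.
# The 3% rate and $17.5 billion annual payment are hypothetical figures.

def simulate_debt(principal, annual_rate, annual_payment, years):
    """Return (total_paid, remaining_balance) after `years` of payments."""
    balance = principal
    total_paid = 0.0
    for _ in range(years):
        balance *= (1 + annual_rate)           # interest compounds first
        payment = min(annual_payment, balance) # never pay more than is owed
        balance -= payment
        total_paid += payment
    return total_paid, balance

# A $540 billion principal repaid over 32 years (roughly 1970-2002):
paid, owed = simulate_debt(540e9, 0.03, 17.5e9, 32)
print(f"Total repaid: ${paid / 1e9:,.0f} billion")
print(f"Still owed:   ${owed / 1e9:,.0f} billion")
```

Even with these mild assumed terms, the simulated debtor pays back more than the original loan and still carries a large balance, which is the qualitative pattern the chapter describes.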
World systems theory, similar to dependency theory, suggests that wealthy countries benefit from other countries and also exploit their citizens. This reflects Immanuel Wallerstein’s theory that the way a country is integrated into the capitalist world system is the key feature determining how economic development takes place in that country. The world economy is a system divided into a hierarchy of three types of countries: core, semiperipheral, and peripheral. Core countries (e.g., U.S., Japan, Germany) are dominant capitalist countries characterized by high levels of industrialization and urbanization. Semiperipheral countries (e.g., South Korea, Taiwan, Mexico, Brazil, India, Nigeria, South Africa) are less developed than core nations but are more developed than peripheral nations. Peripheral countries (e.g., most African countries and low income countries in South America) are dependent on core countries for capital, and have very little industrialization and urbanization. Core countries own most of the world’s capital and technology and have great control over world trade and economic agreements. Semiperipheral countries generally provide labor and materials to core countries. Semiperipheral countries exploit peripheral countries, just as core countries exploit both semiperipheral and peripheral countries. Core countries extract raw materials at little cost. They can also set the prices for the agricultural products that peripheral countries export regardless of market prices, forcing small farmers to abandon their fields because they can’t afford to pay for labor and fertilizer. The wealthy in peripheral countries benefit from the labor of poor workers and from their own economic relations with core country capitalists.
Based on the changing nature of the world economy, production is divided into small pieces, each of which can be moved by a Transnational Corporation (TNC) to any country in the world that can provide the best deal on capital and labor. When businesses and factories move to cheap-labor locations, little effort is made to improve quality of life or to fund development projects in poor countries. Strict laws protecting the environment and the rights of workers, which must be followed in the U.S. and Europe, do not have to be followed in many poor countries. This is attractive for a TNC because bottom-line profits can increase. In many factories, workers are exploited through low wages, long hours, and poor working conditions. Wal-Mart factory workers in China report working fourteen hours a day for less than $3 per day in a hot room with only one fan. One woman reported that rent was deducted from her wages, even after she moved out of the dormitory. During official inspections by outsiders, workers were taught how to lie about working hours and how to present fake pay slips. Workers also faced the threat of physical harassment from managers. There are about 65,000 TNCs across the world today. TNCs continue to take materials and cheap labor from underdeveloped countries.
As long as someone has been making and distributing goods or services, there has been some sort of economy; economies grew larger as societies grew and became more complex. The ancient economy was mainly based on subsistence farming. According to Herodotus, and most modern scholars, the Lydians were the first people to introduce the use of gold and silver coin. It is thought that these first stamped coins were minted around 650-600 BC.
For most people the exchange of goods occurred through social relationships. There were also traders who bartered in the marketplaces. The Babylonians and their city state neighbors developed economic ideas comparable to those employed today. They developed the first known codified legal and administrative systems, complete with courts, jails, and government records.
Several centuries after the invention of cuneiform, the use of writing expanded beyond debt/payment certificates and inventory lists to be applied for the first time, about 2600 BC, to messages and mail delivery, history, legend, mathematics, and astronomical records. Ways to divide private property when it is contended, amounts of interest on debt, rules as to property and monetary compensation concerning property damage or physical damage to a person, fines for ‘wrongdoing’, and compensation in money for various infractions of formalized law were standardized for the first time in history.
In medieval times, what we now call the economy was not far from the subsistence level. Most exchange occurred within social groups. On top of this, the great conquerors raised venture capital to finance their land captures. The capital investment would be returned to the investor when goods from the newly discovered or captured lands were brought back by the conquerors. The voyages of Marco Polo (1254-1324), Christopher Columbus (1451-1506), and Vasco da Gama (1469-1524) set the foundations for a global economy. The first enterprises were trading establishments. In 1513 the first stock exchange was founded in Antwerp.
The captured territories became branches of the European states, the so-called "colonies". The rising nation-states Spain, Portugal, France, Great Britain, and the Netherlands tried to control trade through customs duties and taxes in order to protect their national economies. Mercantilism was a first approach to mediating between private wealth and public interest.
The first economist in the true meaning of the word was the Scotsman Adam Smith (1723-1790). He defined the elements of a national economy: products are offered at a natural price generated by the use of competition (supply and demand) and the division of labour. He maintained that the basic motive for free trade is human self-interest. In Europe, capitalism (see below) started to replace the system of mercantilism and led to economic growth. This period is today called the Industrial Revolution because the system of production and division of labour enabled the mass production of goods.
Capitalism is an economic and social system in which capital and the non-labor factors of production or the means of production are privately controlled; labor, goods and capital are traded in markets; profits are taken by owners or invested in technologies and industries; and wages are paid to labor.
Capitalism as a system developed incrementally from the 16th century on in Europe, although capitalist-like organizations existed in the ancient world, and early aspects of merchant capitalism flourished during the Late Middle Ages. Capitalism gradually spread throughout Europe and other parts of the world. In the 19th and 20th centuries, it provided the main means of industrialization throughout much of the world.
How capitalism works
Criticisms of Capitalism
There are a number of ways to measure the economic activity of a nation, including:
The Gross Domestic Product or GDP of a country is a measure of the size of its economy. While often useful, GDP includes only economic activity for which money is exchanged. GDP and GDP per capita are widely used indicators of a country's wealth. The map below shows GDP per capita of countries around the world:
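As a minimal illustration of the measure, GDP per capita is simply total GDP divided by population. The country names and figures below are hypothetical round numbers, not real statistics:

```python
# Sketch of computing GDP per capita.
# Names and figures are hypothetical round numbers for illustration only.

countries = {
    # name: (GDP in US dollars, population)
    "Country A": (20_000_000_000_000, 330_000_000),  # large, wealthy economy
    "Country B": (40_000_000_000, 20_000_000),       # small, poorer economy
}

for name, (gdp, population) in countries.items():
    per_capita = gdp / population
    print(f"{name}: GDP per capita = ${per_capita:,.0f}")
```

Note how two economies of vastly different total size can be compared on a per-person basis, which is why GDP per capita, rather than raw GDP, is the usual indicator of a country's wealth.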
An informal economy is economic activity that is neither taxed nor monitored by a government and is contrasted with the formal economy as described above. The informal economy is thus not included in a government's Gross National Product or GNP. Although the informal economy is often associated with developing countries, all economic systems contain an informal economy in some proportion. Informal economic activity is a dynamic process which includes many aspects of economic and social theory including exchange, regulation, and enforcement. By its nature, it is necessarily difficult to observe, study, define, and measure. The terms "under the table" and "off the books" typically refer to this type of economy. The term black market refers to a specific subset of the informal economy. Examples of informal economic activity include: the sale and distribution of illegal drugs and unreported payments for house cleaning or baby sitting.
The origins of modern markets can be traced back to the Roman Empire and to the Islamic Golden Age and Muslim Agricultural Revolution, when the first market economy and earliest forms of merchant capitalism took root between the 8th and 12th centuries.
A vigorous monetary economy was created by Muslims on the basis of the expanding levels of circulation of a stable high-value currency and the integration of monetary areas that were previously independent. Innovative new business techniques and forms of business organization were introduced by economists, merchants and traders during this time. Such innovations included the earliest trading companies, big businesses, contracts, bills of exchange, long-distance international trade, the first forms of partnerships, and the earliest forms of credit, debt, profit, loss, capital, capital accumulation, circulating capital, capital expenditure, revenue, cheques, promissory notes, trusts, startup companies, savings accounts, pawning, loaning, exchange rates, bankers, money changers, deposits, the double-entry bookkeeping system, and lawsuits. Organizational enterprises similar to corporations independent from the state also existed in the medieval Islamic world. Many of these early capitalist concepts were adopted and further advanced in medieval Europe from the 13th century onwards.
The economic system employed between the 16th and 18th centuries is commonly described as mercantilism. This period was associated with geographic discoveries by merchant overseas traders, especially from England, and the rapid growth in overseas trade. Mercantilism was a system of trade for profit, although commodities were still largely produced by non-capitalist production methods. While some scholars see mercantilism as the earliest stage of modern capitalism, others argue that modern capitalism did not emerge until later. For example, Karl Polanyi noted that "mercantilism, with all its tendency toward commercialization, never attacked the safeguards which protected [the] two basic elements of production - labor and land - from becoming the elements of commerce"; thus mercantilist attitudes towards economic regulation were closer to feudalist attitudes, "they disagreed only on the methods of regulation." Moreover, Polanyi argued that the hallmark of capitalism is the establishment of generalized markets for what he referred to as the "fictitious commodities": land, labor, and money. Accordingly, "not until 1834 was a competitive labor market established in England, hence industrial capitalism as a social system cannot be said to have existed before that date."
The commercial stage of capitalism began with the founding of the British East India Company and the Dutch East India Company. During this era, merchants who had traded under the previous stage of mercantilism invested capital in the East India Companies and other colonies, seeking a return on investment and setting the stage for capitalism.
During the Industrial Revolution, the industrialist replaced the merchant as a dominant actor in the capitalist system and effected the decline of the traditional handicraft skills of artisans, guilds, and journeymen. Also during this period, the surplus generated by the rise of commercial agriculture encouraged increased mechanization of agriculture. Industrial capitalism marked the development of the factory system of manufacturing, characterized by a complex division of labor between and within the work process and the routinization of work tasks.
In the late 19th century, the control and direction of large areas of industry came into the hands of trusts, financiers and holding companies. This period was dominated by an increasing number of oligopolistic firms earning supernormal profits. Major characteristics of capitalism in this period included the establishment of large industrial monopolies; the ownership and management of industry by financiers divorced from the production process; and the development of a complex system of banking, an equity market, and corporate holdings of capital through stock ownership. Inside these corporations, a division of labor separates shareholders, owners, managers, and actual laborers.
By the last quarter of the 19th century, the emergence of large industrial trusts had provoked legislation in the US to reduce the monopolistic tendencies of the period. Gradually, during this era, the US government played a larger and larger role in passing antitrust laws and regulation of industrial standards for key industries of special public concern. By the end of the 19th century, economic depressions and boom and bust business cycles had become a recurring problem. In particular, the Long Depression of the 1870s and 1880s and the Great Depression of the 1930s affected almost the entire capitalist world, and generated discussion about capitalism’s long-term survival prospects. During the 1930s, Marxist commentators often posited the possibility of capitalism's decline or demise, often in contrast to the ability of the Soviet Union to avoid suffering the effects of the global depression.
In his book The Protestant Ethic and the Spirit of Capitalism (1904-1905), Max Weber sought to trace how a particular form of religious spirit, infused into traditional modes of economic activity, was a condition of possibility of modern western capitalism. For Weber, the 'spirit of capitalism' was, in general, that of ascetic Protestantism; this ideology was able to motivate extreme rationalization of daily life, a propensity to accumulate capital by a religious ethic to advance economically, and thus also the propensity to reinvest capital: this was sufficient, then, to create "self-mediating capital" as conceived by Marx. This is pictured in Proverbs 22:29, “Seest thou a man diligent in his calling? He shall stand before kings” and in Colossians 3:23, "Whatever you do, do your work heartily, as for the Lord rather than for men." In the Protestant Ethic, Weber further stated that “moneymaking – provided it is done legally – is, within the modern economic order, the result and the expression of diligence in one’s calling…” And, "If God show you a way in which you may lawfully get more than in another way (without wrong to your soul or to any other), if you refuse this, and choose the less gainful way, you cross one of the ends of your calling, and you refuse to be God's steward, and to accept His gifts and use them for him when He requireth it: you may labour to be rich for God, though not for the flesh and sin" (p. 108).
The economics of capitalism developed out of the interactions of the following five items:
1. Commodities: There are two types of commodities: capital goods and consumer goods. Capital goods are products not produced for immediate consumption (e.g. land, raw materials, tools, machines, and factories) but are used to produce consumer goods (e.g. televisions, cars, computers, houses) that are sold to others.
2. Money: Money is primarily a standardized means of exchange which serves to reduce all goods and commodities to a standard value. It eliminates the cumbersome system of barter by separating the transactions involved in the exchange of products, thus greatly facilitating specialization and trade through encouraging the exchange of commodities.
3. Labour power: Labour includes all mental and physical human resources, including entrepreneurial capacity and management skills, which are needed to transform one type of commodity into another.
4. Means of production: All manufacturing aids to production such as tools, machinery, and buildings.
5. Production: The act of making goods or services through the combination of labour power and means of production.
Individuals engage in the economy as consumers, labourers, and investors, providing both money and labour power. For example, as consumers, individuals influence production patterns through their purchase decisions, as producers will change production to produce what consumers want to buy. As labourers, individuals may decide which jobs to prepare for and in which markets to look for work. As investors they decide how much of their income to save and how to invest their savings. These savings, which become investments, provide much of the money that businesses need to grow.
Business firms decide what to produce and where this production should occur. They also purchase capital goods to convert them into consumer goods. Businesses try to influence consumer purchase decisions through marketing as well as through the creation of new and improved products. What drives the capitalist economy is the constant search for profits (revenues minus expenses). This need for profits, known as the profit motive, ensures that companies produce the goods and services that consumers desire and are able to buy. In order to be successful, firms must sell a certain quantity of their products at a price high enough to yield a profit; a business may consequently lose money if sales fall too low or costs climb too high. The profit motive also encourages firms to operate efficiently by using their resources in the most productive manner: by using fewer materials, less labour, or less capital, a firm can cut its production costs, which can lead to increased profits.
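The profit calculation described above is simple arithmetic, and the effect of cost-cutting on profit can be sketched in a few lines of Python. All figures below are invented for illustration:

```python
# Profit is revenues minus expenses (a hypothetical illustration).
def profit(revenue, expenses):
    return revenue - expenses

# A firm selling 1,000 units at $25 each, with $20,000 in total costs:
revenue = 1000 * 25
expenses = 20000
print(profit(revenue, expenses))         # 5000

# Cutting production costs (fewer materials, less labour or capital)
# raises profit at the same level of sales:
print(profit(revenue, expenses - 3000))  # 8000
```

The second call shows the efficiency incentive the text describes: with sales unchanged, every dollar shaved off costs passes directly into profit.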
Following Adam Smith, Karl Marx distinguished the use value of commodities from their exchange value in the market. Capital, according to Marx, is created with the purchase of commodities for the purpose of creating new commodities with an exchange value higher than the sum of the original purchases. For Marx, the use of labor power had itself become a commodity under capitalism; the exchange value of labor power, as reflected in the wage, is less than the value it produces for the capitalist. This difference in values, he argues, constitutes surplus value, which the capitalists extract and accumulate. The extraction of surplus value from workers is called exploitation. In his book Capital, Marx argues that the capitalist mode of production is distinguished by how the owners of capital extract this surplus from workers: all prior class societies had extracted surplus labor, but capitalism was new in doing so via the sale-value of produced commodities. Marx argues that a core requirement of a capitalist society is that a large portion of the population must not possess sources of self-sustenance that would allow them to be independent, and must instead be compelled, in order to survive, to sell their labor for a living wage. In conjunction with his criticism of capitalism was Marx's belief that exploited labor would be the driving force behind a revolution to a socialist-style economy. For Marx, this cycle of the extraction of the surplus value by the owners of capital or the bourgeoisie becomes the basis of class struggle. This argument is intertwined with Marx's version of the labor theory of value asserting that labor is the source of all value, and thus of profit. How capitalists generate profit is illustrated in the figure below.
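Marx's account of surplus value reduces to a subtraction: the value labour power produces minus the wage paid for it. A minimal sketch, with all numbers invented for illustration:

```python
# Surplus value, in Marx's terms: value produced by labour power
# minus the exchange value of that labour power (the wage).
def surplus_value(value_produced, wage):
    return value_produced - wage

# A worker whose day's labour adds $300 of value but who is paid $100:
print(surplus_value(300, 100))           # 200, extracted by the capitalist

# Marx's rate of surplus value ("rate of exploitation") is
# surplus value divided by the wage:
def rate_of_exploitation(value_produced, wage):
    return surplus_value(value_produced, wage) / wage

print(rate_of_exploitation(300, 100))    # 2.0
```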
The market is a term used by economists to describe a central exchange through which people are able to buy and sell goods and services. In a capitalist economy, the prices of goods and services are controlled mainly through supply and demand and competition. Supply is the amount of a good or service produced by a firm and available for sale. Demand is the amount that people are willing to buy at a specific price. Prices tend to rise when demand exceeds supply and fall when supply exceeds demand, so that the market is able to coordinate itself through pricing until a new equilibrium price and quantity is reached. Competition arises when many producers are trying to sell the same or similar kinds of products to the same buyers. Competition is important in capitalist economies because it leads to innovation and more reasonable prices, as firms that charge lower prices or improve the quality of their products can take buyers away from competitors (i.e., increase market share). Furthermore, without competition, a monopoly or cartel may develop. A monopoly occurs when a firm supplies the total output in the market; when this occurs, the firm can limit output and raise prices because it has no fear of competition. A cartel is a group of firms that act together in a monopolistic manner to control output and raise prices. Many countries have competition and anti-trust laws that prohibit monopolies and cartels from forming. In many capitalist nations, public utilities (communications, gas, electricity, etc.) are able to operate as monopolies under government regulation due to high economies of scale.
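The self-coordination through pricing described above can be illustrated with a toy price-adjustment loop: price rises while demand exceeds supply and falls in the opposite case, settling at the equilibrium where the two curves cross. The linear curves and every coefficient below are hypothetical, chosen only to make the mechanism visible:

```python
# Toy price adjustment: linear demand falls with price, linear supply
# rises with price; the price moves toward the point where they meet.
# All coefficients are invented for illustration.
def demand(price):
    return 100 - 2 * price   # buyers want less as price rises

def supply(price):
    return 10 + 1 * price    # firms offer more as price rises

price = 5.0
for _ in range(1000):
    excess_demand = demand(price) - supply(price)
    price += 0.01 * excess_demand   # raise price if demand > supply

# Equilibrium: 100 - 2p = 10 + p  ->  p = 30, quantity = 40
print(round(price, 2))           # 30.0
print(round(demand(price), 2))   # 40.0
```

The loop converges because any excess of demand over supply pushes the price up, which simultaneously dampens demand and encourages supply, shrinking the gap on the next pass.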
Income in a capitalist economy depends primarily on what skills are in demand and what skills are currently being supplied. People who have skills that are in scarce supply are worth a lot more in the market and can attract higher incomes. Competition among employers for workers, and among workers for jobs, helps determine wage rates. Firms need to pay high enough wages to attract the appropriate workers; however, when jobs are scarce, workers may accept lower wages than when jobs are plentiful. Labour unions and the government also influence wages in capitalist nations. Unions act to represent labourers in negotiations with employers over such things as wage rates and acceptable working conditions. Most countries have an established minimum wage, and other government agencies work to establish safety standards. Unemployment is a necessary component of a capitalist economy, as it ensures an excess pool of labourers; without unemployed individuals, capitalists would be unable to exploit their workers, because workers could demand to be paid what they are worth. What's more, when people leave the employed workforce and experience a period of unemployment, the longer they stay out of the workforce, the longer it takes them to find work and the lower their salaries will be when they return. Thus, not only do the unemployed help drive down the wages of those who are employed, they also suffer financially when they return to the paid workforce.
In capitalist nations, the government allows for private property, and individuals are allowed to work where they please. The government also generally permits firms to determine what wages they will pay and what prices they will charge for their products. The government also carries out a number of important economic functions. For instance, it issues money, supervises public utilities, and enforces private contracts. Laws, such as competition policy, protect competition and prohibit unfair business practices. Government agencies regulate the standards of service in many industries, such as airlines and broadcasting, as well as financing a wide range of programs. In addition, the government regulates the flow of capital and uses instruments such as the interest rate to control factors such as inflation and unemployment.
Critics argue that capitalism is associated with the unfair distribution of wealth and power; a tendency toward market monopoly or oligopoly (and government by oligarchy); imperialism, counter-revolutionary wars and various forms of economic and cultural exploitation; repression of workers and trade unionists, and phenomena such as social alienation, economic inequality, unemployment, and economic instability. Critics have argued that there is an inherent tendency towards oligopolistic structures when laissez-faire laws are combined with capitalist private property. Capitalism is regarded by many socialists to be irrational in that production and the direction of the economy are unplanned, creating inconsistencies and internal contradictions and thus should be controlled through public policy.
In the early 20th century, Vladimir Lenin argued that state use of military power to defend capitalist interests abroad was an inevitable corollary of monopoly capitalism. Economist Branko Horvat states, "it is now well known that capitalist development leads to the concentration of capital, employment and power. It is somewhat less known that it leads to the almost complete destruction of economic freedom." Ravi Batra argues that excessive income and wealth inequalities are a fundamental cause of financial crisis and economic depression, which will lead to the collapse of capitalism and the emergence of a new social order.
Environmentalists have argued that capitalism requires continual economic growth and will inevitably deplete the finite natural resources of the earth, and other broadly utilized resources. Murray Bookchin has argued that capitalist production externalizes environmental costs to all of society, and is unable to adequately mitigate its impact upon ecosystems and the biosphere at large. Labor historians and scholars such as Immanuel Wallerstein have argued that unfree labor — by slaves, indentured servants, prisoners, and other coerced persons — is compatible with capitalist relations.
A common response to the criticism that capitalism leads to inequality is the argument that capitalism also leads to economic growth and generally improved standards of living. Capitalism does promote economic growth, as measured by gross domestic product (GDP), capacity utilization, or standard of living. This argument was central, for example, to Adam Smith's advocacy of letting a free market control production and price and allocate resources. Many theorists have noted that this increase in global GDP over time coincides with the emergence of the modern world capitalist system.
While the measurements are not identical, proponents argue that increasing GDP (per capita) is empirically shown to bring about improved standards of living, such as better availability of food, housing, clothing, and health care.
Socialism refers to various theories of economic organization advocating public or direct worker ownership and administration of the means of production and allocation of resources, and a society characterized by equal access to resources for all individuals, with a method of compensation based on the amount of labor expended. Most socialists share the view that capitalism unfairly concentrates power and wealth among a small segment of society that controls capital and derives its wealth through exploitation, creates an unequal society, does not provide equal opportunities for everyone to maximise their potentialities, and does not utilise technology and resources to their maximum potential or in the interests of the public.
Description of Socialism
Examples of Socialism
Criticisms of Socialism
Criticisms of socialism range from claims that socialist economic and political models are inefficient or incompatible with civil liberties to condemnation of specific socialist states. In the economic calculation debate, classical liberal Friedrich Hayek argued that a socialist command economy could not adequately transmit information about prices and productive quotas due to the lack of a price mechanism, and as a result it could not make rational economic decisions. Ludwig von Mises argued that a socialist economy was not possible at all, because of the impossibility of rational pricing of capital goods in a socialist economy since the state is the only owner of the capital goods. Hayek further argued that the social control over distribution of wealth and private property advocated by socialists cannot be achieved without reduced prosperity for the general populace, and a loss of political and economic freedoms.
The People's Republic of China, North Korea, Laos and Vietnam are Asian states remaining from the first wave of socialism in the 20th century. States with socialist economies have largely moved away from centralised economic planning in the 21st century, placing a greater emphasis on markets, as in the case of the Chinese Socialist market economy and Vietnamese Socialist-oriented market economy. In China, the Chinese Communist Party has led a transition from the command economy of the Mao period to an economic program called the socialist market economy or "socialism with Chinese characteristics." Under Deng Xiaoping, the leadership of China embarked upon a program of market-based reform that was more sweeping than had been Soviet leader Mikhail Gorbachev's perestroika program of the late 1980s. Deng's program, however, maintained state ownership rights over land, state or cooperative ownership of much of the heavy industrial and manufacturing sectors and state influence in the banking and financial sectors.
Elsewhere in Asia, some elected socialist parties and communist parties remain prominent, particularly in India and Nepal. The Communist Party of Nepal in particular calls for multi-party democracy, social equality, and economic prosperity. In Singapore, a majority of the GDP is still generated from the state sector comprised of government-linked companies. In Japan, there has been a resurgent interest in the Japanese Communist Party among workers and youth.
In Europe, the Left Party in Germany has grown in popularity, becoming the fourth biggest party in parliament in the general election on 27 September 2009. Communist candidate Dimitris Christofias won a crucial presidential runoff in Cyprus, defeating his conservative rival with a majority of 53% of the vote. In Greece, in the general election on 4 October 2009, the Communist KKE got 7.5% of the votes and the new Socialist grouping, Syriza or "Coalition of the Radical Left", won 4.6% or 361,000 votes.
In Ireland, in the 2009 European election, Joe Higgins of the Socialist Party took one of four seats in the capital Dublin European constituency. In Denmark, the Socialist People's Party more than doubled its parliamentary representation to 23 seats from 11, making it the fourth largest party.
In France, the Revolutionary Communist League candidate in the 2007 presidential election, Olivier Besancenot, received 1,498,581 votes, 4.08%, double that of the Communist candidate. The LCR abolished itself in 2009 to initiate a broad anti-capitalist party, the New Anticapitalist Party, whose stated aim is to "build a new socialist, democratic perspective for the twenty-first century".
In some Latin American countries, socialism has re-emerged in recent years, with an anti-imperialist stance, the rejection of the policies of neoliberalism, and the nationalisation or part nationalisation of oil production, land and other assets. Venezuelan President Hugo Chávez, Bolivian President Evo Morales, and Ecuadorian president Rafael Correa for instance, refer to their political programs as socialist.
An April 2009 Rasmussen Reports poll conducted during the Financial crisis of 2007–2009 suggested there had been a growth of support for socialism in the United States. The poll results stated that 53% of American adults thought capitalism was better than socialism, and that "Adults under 30 are essentially evenly divided: 37% prefer capitalism, 33% socialism, and 30% are undecided". The question posed by Rasmussen Reports did not define either capitalism or socialism.
In one example of socialism, the Soviet Union, state ownership was combined with central planning. In this scenario, the government determined which goods and services were produced, how they were to be produced, the quantities, and the sale prices. Centralized planning is an alternative to allowing the market (supply and demand) to determine prices and production. In the West, neoclassical liberal economists such as Friedrich Hayek and Milton Friedman said that socialist planned economies would fail because planners could not have the business information inherent to a market economy (cf. economic calculation problem), nor could managers in Soviet-style socialist economies match the motivation of profit. Consequent to Soviet economic stagnation in the 1970s and 1980s, socialists began to accept parts of these critiques. Polish economist Oskar Lange, an early proponent of market socialism, proposed a central planning board establishing prices and controls of investment. The prices of producer goods would be determined through trial and error. The prices of consumer goods would be determined by supply and demand, with the supply coming from state-owned firms that would set their prices equal to the marginal cost, as in perfectly competitive markets. The central planning board would distribute a "social dividend" to ensure reasonable income equality.
In western Europe, particularly in the period after World War II, many socialist parties in government implemented what became known as mixed economies. In his biography of the 1945 UK Labour Party Prime Minister Clement Attlee, Francis Beckett states: "the government... wanted what would become known as a mixed economy". Beckett also states that "Everyone called the 1945 government 'socialist'." These governments nationalised major and economically vital industries while permitting a free market to continue in the rest. These were most often monopolistic or infrastructural industries like mail, railways, power and other utilities. In some instances a number of small, competing and often relatively poorly financed companies in the same sector were nationalised to form one government monopoly for the purpose of competent management, of economic rescue (in the UK, British Leyland, Rolls Royce), or of competing on the world market.
Typically, this was achieved through compulsory purchase of the industry (i.e. with compensation). In the UK, the nationalisation of the coal mines in 1947 created a coal board charged with running the coal industry commercially so as to be able to meet the interest payable on the bonds which the former mine owners' shares had been converted into.
Marxist and non-Marxist social theorists agree that socialism developed in reaction to modern industrial capitalism, but disagree on the nature of their relationship. Émile Durkheim posits that socialism is rooted in the desire to bring the state closer to the realm of individual activity, in countering the anomie of a capitalist society. In socialism, Max Weber saw acceleration of the rationalisation started in capitalism. As a critic of socialism, he warned that placing the economy entirely in the state's bureaucratic control would result in an "iron cage of future bondage".
The Marxist conception of socialism is that of a specific historical phase that will displace capitalism and be a precursor to communism. The major characteristics of socialism are that the proletariat will control the means of production through a workers' state erected by the workers in their interests. Economic activity is still organised through the use of incentive systems and social classes would still exist but to a lesser and diminishing extent than under capitalism. For orthodox Marxists, socialism is the lower stage of communism based on the principle of "from each according to his ability, to each according to his contribution" while upper stage communism is based on the principle of "from each according to his ability, to each according to his need"; the upper stage becoming possible only after the socialist stage further develops economic efficiency and the automation of production has led to a superabundance of goods and services.
Socialism is not a concrete philosophy of fixed doctrine and program; its branches advocate a degree of social interventionism and economic rationalisation (usually in the form of economic planning), sometimes opposing each other. Some socialists advocate complete nationalisation of the means of production, distribution, and exchange; others advocate state control of capital within the framework of a market economy.
Socialists inspired by the Soviet model of economic development have advocated the creation of centrally planned economies directed by a state that owns all the means of production. Others, including Yugoslavian, Hungarian, German and Chinese Communists in the 1970s and 1980s, instituted various forms of market socialism, combining co-operative and state ownership models with the free market exchange and free price system (but not free prices for the means of production).
Social democrats propose selective nationalisation of key national industries in mixed economies, while maintaining private ownership of capital and private business enterprise. Social democrats also promote tax-funded welfare programs and regulation of markets. Many social democrats, particularly in European welfare states, refer to themselves as socialists, introducing a degree of ambiguity to the understanding of what the term means.
Modern socialism originated in the late 18th-century intellectual and working class political movement that criticised the effects of industrialisation and private ownership on society. The utopian socialists, including Robert Owen (1771–1858), tried to found self-sustaining communes by secession from a capitalist society. Henri de Saint Simon (1760–1825), the first individual to coin the term socialisme, was the original thinker who advocated technocracy and industrial planning. The first socialists predicted a world improved by harnessing technology and better social organisation; many contemporary socialists share this belief. Early socialist thinkers tended to favour an authentic meritocracy combined with rational social planning.
The Financial crisis of 2007–2009 led to mainstream discussions as to whether "Marx was right". Time magazine ran an article titled "Rethinking Marx" and put Karl Marx on the cover of its 28th of January 2009 European edition. While the mainstream media tended to conclude that Marx was wrong, this was not the view of socialists and left-leaning commentators.
You may think this is a fairly straightforward question with a simple answer. But think about it for a minute then ask yourself, "Who and what do I include in my family?" You might say your parents, your siblings, and/or your spouse. But what about the family pet? What if you were raised by your grandparents and your parents played little to no role in your life? Who, then, do you consider your family? Is family limited to genetically related individuals; those we typically think of as kin?
Questioning the basic concept of family is a relatively new phenomenon, though variations in what we consider a "family" are not. There are so many variations of "family" today that it is hard to define what, exactly, a family is. Generally, we think of a family as a domestic group of people, or a number of domestic groups linked through descent from: (1) a common ancestor, (2) marriage, or (3) adoption. Families generally have some degree of kinship.
But society increasingly accepts a number of variations on traditional family forms. Consider each of the following examples:
Each of the above groups differs from the traditional form of family: a man, a woman, and their children. This traditional notion of parents and children as family is called a nuclear family and is an invention of the Western World. It is a social construct that does not necessarily reflect the reality of family life for many people. In fact, with recent developments in the U.S., the nuclear family is no longer the primary form of social life in the U.S. According to recent census data, more adult women now live alone or are raising their children alone than are living with a spouse or raising their children with a spouse. What many people consider a family is not the only family form; families are diverse in both form and function.
Changes Over Time
The Function of Families
The Sociobiology of Families
The structure of families traditionally hinges on relations between parents and children, between spouses, or both. Consequently, there is substantial variation in family forms around the world, varying from culture to culture. The most common form of family structure in the Western World is monogamy, which is the practice of having only one spouse at a time. A variety of other family structures exist. One prevalent form is polygamy, which broadly refers to any form of marriage in which a person has more than one spouse. Historically, polygamy has been practiced as polygyny (one man having more than one wife), as polyandry (one woman having more than one husband), or, less commonly, as group marriage (many wives and many husbands at one time).
Another factor that leads to cultural variations in family forms is attitudes toward endogamy. Endogamy is the practice of marrying within a social group. Cultures that practice endogamy require marriage between specified social groups, classes, or ethnicities. Many people tend to marry members of their own social group, but there are some societies that practice endogamy very strictly and as part of their moral values, traditions, or religious beliefs. An example of a strictly endogamous religious group is the Yazidi in Northern Iraq, who prohibit any inter-clan marrying. Endogamy is a common practice among displaced cultures attempting to make roots in new countries as it encourages group solidarity and ensures greater control over group resources (which may be important to preserve where a group is attempting to establish itself within an alien culture).
Family structures of some kind are found in every society. Pairing off into formal or informal marital relationships originated in hunter-gatherer groups to forge networks of cooperation beyond the immediate family. Intermarriage between groups, tribes, or clans was often political or strategic and resulted in reciprocal obligations between the two groups represented by the marital partners. Even so, marital dissolution was not a serious problem as the obligations resting on marital longevity were not particularly high.
The development of horticultural or agriculture-based societies fundamentally changed the nature of marriage. With the advent of sedentary societies, marriage became one of the central institutions for forging economic and political relationships and was no longer viewed, at least among the aristocracy, as a relationship that should be based on love, companionship, or sexual attraction. Among the aristocratic elite, marriage became a means of developing alliances or amassing wealth. For the non-elites, marriage was a pragmatic way of supporting oneself: it was easier to survive if resources (i.e., food, labor power, childcare responsibilities, etc.) were pooled between several people.
You can see a good example of the changing nature of families in the family structure of Ancient Rome. In Ancient Rome the family structure was centered on the father (see paterfamilias). In this structure, fathers held great power (patria potestas) over those living with them: They could force marriage and divorce, sell children into slavery, claim dependents' property as their own, and possibly even claim the right to kill family members. Patria potestas extended over adult sons with their own households: A man was not considered a paterfamilias while his own father lived. A daughter, when she married, usually fell under the authority of the paterfamilias of her husband's household, although this was not always the case, as she could choose to continue recognizing her father's family as her true family. However, as Romans reckoned descent through the male line, any children she had would belong to her husband's family. Groups of related households formed a family. Families were based on blood ties (or adoption), but were also used for political and economic alliances. Ancient Roman marriage was often regarded more as a financial and political alliance than as a romantic association, especially in the upper classes.
Modern forms of family structure and marriage in the West have their roots in Christian philosophy and practice. The nuclear family emerged during the late medieval period and was formalized during the Council of Trent, in which marriage was defined as, "The conjugal union of man and woman, contracted between two qualified persons, which obliges them to live together throughout life." While a variety of family structures continue to exist around the world today, including polygamous families in many societies (including the U.S.), the predominant form is monogamy (though, as noted above, this is no longer the majority form). As described below, variations on monogamous relationships are increasingly prevalent (e.g., same-sex marriage), as are alternatives to monogamy (e.g., single-parent households).
The primary function of the family is to reproduce society, both biologically, through procreation, and socially, through socialization. Given these functions, one's experience of one's family shifts over time. From the perspective of children, the family is a family of orientation: the family functions to locate children socially, and plays a major role in their socialization. From the point of view of the parent(s), the family is a family of procreation: the family functions to produce and socialize children. In some cultures marriage imposes upon women the obligation to bear children. In northern Ghana, for example, payment of bridewealth signifies a woman's requirement to bear children, and women using birth control face substantial threats of physical abuse and reprisals.
Producing offspring is not the only function of the family. Marriage sometimes establishes the legal father of a woman's child; establishes the legal mother of a man's child; gives the husband or his family control over the wife's sexual services, labor, and/or property; gives the wife or her family control over the husband's sexual services, labor, and/or property; establishes a joint fund of property for the benefit of children; and establishes a relationship between the families of the husband and wife. No society does all of these, and no one of these is universal. In societies with a sexual division of labor, marriage, and the resulting relationship between a husband and wife, is necessary for the formation of an economically productive household. In modern societies marriage entails particular rights and privileges that encourage the formation of new families even when there is no intention of having children.
In almost all societies, marriage between brothers and sisters is
forbidden, with Ancient Egyptian, Hawaiian, and Inca royalty being the
rare exceptions. In many societies, marriage between some first cousins
is preferred, while at the other extreme, the medieval Catholic church
prohibited marriage even between distant cousins. The present day
Catholic Church still maintains a standard of required distance (in both
consanguinity and affinity) for marriage.
These sorts of restrictions are a form of exogamy, the requirement to marry someone from outside one's own group. The royal exception noted above, in which sibling marriage was permitted in ancient Egypt, Hawaii, and among the Inca, was denied to commoners and may have served to concentrate wealth and power in one family. More generally, the incest taboo produces exogamy, which may serve to promote social solidarity by creating marriage ties between groups.
Courtship is the traditional dating period before engagement and marriage. It is an alternative to arranged marriages in which the couple doesn't meet before the wedding. During a courtship, a couple gets to know each other and decides if there will be an engagement. Courting includes activities such as dating where couples go together for some activity (e.g., a meal or movie). Courting can also take place without personal contact, especially with modern technology. Virtual dating, chatting on-line, sending text messages, conversing over the telephone, instant messaging, writing letters, and sending gifts are all modern forms of courting.
Courtship varies both by time period and by region of the world. One way courtship varies is in its duration; courting can take days or years. In the United Kingdom, a poll of 3,000 engaged or married couples suggested an average duration of 2 years and 11 months between first meeting and engagement.
While the date is fairly casual in most European-influenced cultures, in some traditional societies, courtship is a highly structured activity, with very specific formal rules. In some societies, the parents or community propose potential partners, and then allow limited dating to determine whether the parties are suited. In Japan, some parents hire a matchmaker to provide pictures and résumés of potential mates, and if the couple agrees, there will be a formal meeting with the matchmaker and often parents in attendance; this is called Omiai. In more closed societies, courtship is virtually eliminated altogether by the practice of arranged marriages, where partners are chosen for young people, typically by their parents. Forbidding experimental and serial courtship and sanctioning only arranged matches is partly a means of guarding the chastity of young people and partly a matter of furthering family interests, which in such cultures may be considered more important than individual romantic preferences. Another variation of courtship is the bundling tradition, which likely originated in Scandinavia and was carried to the U.S. by immigrants. Bundling involved potential mates spending the night together in the same bed, though the couple was not supposed to engage in sexual relations. This practice ceased in the late 19th Century.
In earlier centuries, young adults were expected to court with the intention of finding a marriage partner, rather than for social reasons. However, by the 1920s, dating for fun was becoming an expectation, and by the 1930s, it was assumed that any popular young person would have lots of dates. This form of dating, though, was usually more chaste than is seen today, since pre-marital sex was not considered the norm even though it was widespread. As a result of social changes spurred by the sexual revolution of the 1960s and 1970s, the taboo of sex during dating began to wane. Couples today are more likely to "hook up" or "hang out" with large groups rather than go on old-fashioned, paired dates. In recent years, a number of college newspapers have featured editorials where students decry the lack of "dating" on their campuses. This may be a result of a highly publicized 2001 study and campaign sponsored by the conservative American women's group Independent Women's Forum, which promotes "traditional" dating. Also, in recent years traditional dating has evolved and adapted to sustain itself in today's world, as can be seen in the rise of internet dating, speed dating, and gradual exclusivity dating (a.k.a. slow dating). Some theorize that courtship as it was known to prior generations has seen its last days and that the closest remaining practice is gradual exclusivity, in which the partners respect and value each other's individual lives but still maintain the ultimate goal of being together even if time or space does not permit it now.
Courtship is used by a number of theorists to explain gendering processes and sexual identity. Scientific research into courtship began in the 1980s after which time academic researchers started to generate theories about modern dating practices and norms. Both Moore and Perper found that, contrary to popular beliefs, courtship is normally triggered and controlled by women, driven mainly by non-verbal behaviours to which men respond. This is generally supported by other theorists who specialise in the study of body language. Feminist scholars, however, continue to regard courtship as a socially constructed (and male-led) process organised to subjugate women. The main weakness in this argument, however, lies in the continuing empirical evidence that the institutions of courtship are more strongly supported by women than men. Farrell reports, for example, that magazines about marriage and romantic fiction continue to attract a 98% female readership.
Systematic research into courtship processes inside the workplace, as well as two 10-year studies examining norms in different international settings, continues to support a view that courtship is a social process that socialises both sexes into accepting forms of relationship that maximise the chances of successfully raising children. This may negatively impact women, particularly those seeking independence and equality at work.
Marriage is a governmentally, socially, or religiously recognized interpersonal relationship, usually intimate and sexual, that is often created as a form of contract. The most frequently occurring form of marriage is between a woman and a man, where the feminine term wife and the masculine husband are generally used to describe the parties to the contract. Other forms of marriage also exist. For example, polygamy, in which a person takes more than one spouse, is present in many societies. (See, for instance, the Kaingang of Brazil, where close to 40% of marriages were not monogamous but included multiple spousal partners.) Currently, the legal concept of marriage is expanding to include same-sex marriage in some areas.
The ceremony in which a marriage is enacted and announced to the community is called a wedding. The ways in which marriages are enacted have changed over time. In Europe during the Middle Ages, marriage was enacted by the couple promising verbally that they would be married to each other; the presence of a priest or other witnesses was not required. This promise was known as the "verbum". If made in the present tense (e.g., "I marry you"), it was unquestionably binding; if made in the future tense ("I will marry you"), it would, by itself, constitute a betrothal. If the couple then proceeded to have sexual relations, the union became a binding marriage. As part of the Reformation, the role of recording marriages and setting the rules for marriage passed to the state; by the 1600s many of the Protestant European countries had heavy state involvement in marriage. As part of the Counter-Reformation, the Catholic Church added a requirement of witnesses to the promise, which under normal circumstances had to include the priest.
The reasons people marry vary widely, but usually include: to publicly and formally declare their love, the formation of a single household unit, legitimizing sexual relations and procreation, social and economic stability, and the education and nurturing of children. A marriage can be declared by a wedding ceremony, which may be performed either by a religious officiator or through a similar government-sanctioned secular process. The act of marriage creates obligations between the individuals involved, and, in some societies, between the parties' extended families. Marriages are perpetual agreements with legal consequences, terminated only by the death of one party or by formal dissolution processes, such as divorce and annulment.
Schwartz and Mare examined trends in marriage over time and found that the old maxim,
"opposites attract" is less accurate of marriage than the maxim "birds
of a feather flock together." Their research focused on one specific
similarity in marital partners: education. They found that the
educational similarity of American married couples decreased slightly
after World War II but has since increased substantially. As of
2003, one's level of educational attainment was a significant predictor
of the educational attainment of one's spouse. People without a high
school diploma are unlikely to marry someone with more educational
attainment, and people with a college degree are likely to marry people
with a similar level of educational attainment. Part of the reason why
education is so influential in determining the level of education of
one's spouse is because people tend to form groups based on levels of
education. First there are the groups formed in the process of becoming
educated (many people meet their spouses in school). But jobs after one
completes his/her education also tend to be grouped by level of
education. As a result, people spend more time with individuals of a
similar level of educational attainment. As most people tend to marry or
partner with individuals with whom they spend a lot of time, it is not
surprising that there is significant educational similarity between spouses.
One well-known attribute of marriage is that it tends to have health benefits. Happily married people tend to be healthier than unmarried people. However, unhappily married couples may not receive the same health benefits and may actually be less healthy than their single peers.
Kids raised in families with only one parent or with step-parents are more likely to have behavior problems than are kids raised with both parents. Previous research has attributed this to the fact that single-parent households tend to be poorer than dual-parent households and poverty is related to behavior problems. A recent study by Marcia Carlson adds another clue as to why children raised in single-parent households or households with step-parents might have more behavior problems. According to Carlson's study of nearly 3,000 teens, children whose fathers are more involved in their lives are less likely to have behavior problems. When the relationship between a child and the father is missing, children in single-parent and step-parent families are more likely to act out in negative ways. There is a take-home lesson from this study: One way to reduce behavior problems in teenagers is to encourage a close relationship between them and their fathers. There is also some evidence that the gender of children affects father involvement in the home. Thus, while having an active father helps reduce behavior problems, fathers having sons appears to improve their dedication and devotion to their families.
Another recent finding related to children is that parents have less influence on their children's mate selections now than they used to. Rosenfeld and Kim found that a new life stage has developed in the latter half of the 20th century that they call the "independent life stage." This life stage is a period of time in which young adults leave home and live alone (often to go to college). Children used to remain in their parents' home until they left to start their own family, but the independent life stage is a period of single living prior to the formation of one's own family. As a result of this life stage, which also overlaps with the time in most children's lives when they are beginning to look for life partners, parents are decreasingly involved in the decision-making process. Rosenfeld and Kim attribute the increase in interracial and same-sex marriages to the increasing prevalence of this life stage.
How parents raise children and childhood autonomy have also changed over time. Analyzing back issues of Parents magazine, Markella Rutherford found that parents face a difficult task of trying to balance authority with childhood autonomy. Children have gained more autonomy in private spaces in their homes (e.g., they are allowed to decorate their own rooms) but have lost autonomy in public areas outside the home (e.g., they are not allowed to go to the park alone). Some of the loss of autonomy is the result of a growing, and generally overstated, fear among parents of child predators. This same factor may help explain the increase in autonomy in the home, as parents try to compensate for heightened supervision outside the home by allowing children greater freedom inside it.
Kids are closely connected to a family's socioeconomic status as well. Kids are expensive to raise, and the more kids a family has, on average, the lower the social class (though this is a complicated relationship as the causal direction goes both ways). One illustration of this is the upward mobility of Catholics in the U.S. over the past few decades. Catholics in the U.S. used to be among the lower-middle sector of the population in terms of income and wealth. But as of the 1990s and 2000s, Catholics in the U.S. rank higher, just above the middle. Part of the reason for this has been a decrease in the size of Catholic families; Catholics are having about the same number of children as are non-Catholics in the U.S. With smaller families, Catholics are able to retain more of their income and turn it into savings, improving their socioeconomic status. Thus, the reduction in the number of kids among Catholics has actually improved their socioeconomic status on average.
Numerous studies have tried to determine why 50 percent of marriages
end in divorce within the first 25 years in the United States. Carrie Yodanis
takes a different approach, analyzing marriages that remain intact. Her
cross-national study of 22 countries suggests that where there is more
tolerance of divorce, there is greater gender equality within marriage.
Most researchers think that women are disadvantaged by the ease of
divorce. Where divorce is an option, women may feel more insecure and
are therefore reluctant to push for change within a marriage.
Contradicting these claims, Yodanis shows that the ease of divorce
increases women's equality within marriage. Divorce is a negotiating
tool for women who seek change in their relationships. In countries with
greater acceptance of divorce, there is a more equal division of
household work. Men risk losing their wives if they refuse to shape up
and help out around the house. Thus, while divorce has helped many women
leave unhealthy marriages in which they are at risk of great physical
and mental harm, the ability to get a divorce easily may also strengthen women's position within the marriages that remain intact.
Another factor that contributes to high divorce rates is the change in laws that made divorce easier in the 1970s in the U.S. The introduction of "no-fault divorce" laws removed the burden of proving that one's spouse had caused irreparable damage to the relationship (through adultery, battery, etc.). Women's rights advocates pushed for this change as it facilitated women's ability to exit unhealthy marriages. Another factor that may have contributed to the rise in divorce is a cultural mystique suggesting that marriage should be blissful and free of problems. This is a common depiction of marriage in media but is far from an accurate portrayal. The unrealistic expectations many couples bring to marriage heighten the probability of divorce when reality does not align with those expectations.
The increasing participation of women in the workforce has provided women with greater financial security, and with financial security comes independence. Thus, women's workforce participation also increases the odds of divorce, though, consistent with Yodanis's argument, it primarily empowers women in bad marriages to leave without experiencing a dramatic decline in quality of life.
Finally, there is a social-psychological component to divorce as well. People have a tendency to unfairly judge other people's behavior as being "part of who they are" or part of their identity. However, they do not hold themselves to the same critical judgment; when they do something that is mean, it is situationally determined and doesn't reflect their identity. This is known as the fundamental attribution error and can lead to the dissolution of relationships. This occurs when the parties in the relationship attribute the negative behaviors they dislike to fundamental characteristics of the person rather than to the situation. There is evidence to suggest that specific types of couple therapy can help couples see where they are committing the fundamental attribution error and substantially reduce the odds of divorce.
Divorce and widowhood carry with them the burden of reduced health. Divorced people have 20% more chronic health conditions (e.g., heart disease, diabetes, cancer) than do married people. They also have 23% more mobility limitations (e.g., climbing stairs or walking a block). Divorcees experience declines in health because of two things that accompany divorce: reduced income and stresses over the dissolution of the marriage (e.g., things like negotiating child care). These differences in health remain even after an individual remarries.
Families and family stability are strongly influenced by social class. As the figure to the right illustrates, the probability of a first marriage ending is substantially higher for couples that live in poorer communities. It is also higher for blacks than for whites and Hispanics. The higher rates of divorce for individuals in lower social classes are largely attributable to the greater stresses these couples face, particularly financial stress.
While marriage does increase the socioeconomic status of women, men reap many benefits from this type of living arrangement. For instance, while many women manage a household's finances, men generally retain control of the money. As a result, when heterosexual couples divorce, women are much less affluent, and a large percentage of divorced, single women fall below the poverty line. Men also obtain greater mental health benefits from marriage than do women and report greater marital satisfaction than do women. The greater marital satisfaction men report is likely the result of the benefits they receive from marriage (companionship and household labor) while not having to sacrifice occupational success or career advancement. Married women, on the other hand, often do have to sacrifice occupational success or career advancement, leading many highly skilled women to leave the workforce. Many of these women leave because the work environment does not allow for a balance between work and family; the lack of flexible work options for highly skilled women results in many of them "opting out" of the workforce, which is not a problem men typically face. Married women also have higher rates of mental illness than do single, widowed, and divorced women. Women who marry at younger ages generally have lower educational attainment (though the causal direction here goes both ways), have more kids, and are more dependent on their husbands financially. In short, the benefits of marriage tend to favor men over women.
Societies have also at times required marriage from within a certain group. Anthropologists refer to these restrictions as endogamy. An example of such restrictions would be a requirement to marry someone from the same tribe. Racist laws adopted by some societies in the past, such as Nazi-era w:Germany, apartheid-era South Africa and most of the United States in the first half of the 20th century, which prohibited marriage between persons of different races, could also be considered examples of endogamy. In the U.S., these laws were largely repealed between 1940 and 1960. The U.S. Supreme Court declared all such laws unconstitutional in the case of Loving v. Virginia in 1967.
Cultures that practiced slavery might admit that slave marriages formed but grant them no legal status. This was the practice under the Roman Empire; thus, in the Acts of Perpetua and Felicitas, the freewoman Perpetua could be described as "a married matron" but Felicitas as the "fellow-servant" of Revocatus, even though the Christians regarded such marriages as religiously binding. Likewise, slave marriages in the United States were not binding, so many contrabands escaping slavery during the American Civil War sought official status for their marriages. Among the rights distinguishing serfdom from slavery was the right to enter a legally recognizable marriage.
The participants in a marriage usually seek social recognition for their relationship, and many societies require official approval of a religious or civil body. In the Protestant tradition, Calvin and his colleagues reformulated marriage through enactment of The Marriage Ordinance of Geneva, imposing, "The dual requirements of state registration and church consecration to constitute marriage." In England and Wales, it was Lord Hardwicke's Marriage Act that first required a formal ceremony of marriage, thereby curtailing the practice of Fleet Marriage.
In many jurisdictions, the civil marriage ceremony may take place during the religious marriage ceremony, although they are theoretically distinct. In most American states, the marriage may be officiated by a priest, minister, rabbi or other religious authority, and in such a case the religious authority acts simultaneously as an agent of the state. In some countries, such as France, Spain, Germany, Turkey, Argentina, and Russia, it is necessary to be married by the state before having a religious ceremony.
Conversely, there are people who have religious ceremonies that are not recognized by civil authorities. Examples include widows who stand to lose a pension if they remarry and so undergo a marriage only in the eyes of God and the community; homosexual couples (where same-sex marriage is not legally recognized); some sects that recognize polygamy (see Mormon fundamentalism); retired couples who would lose pension benefits if legally married; Muslim men who wish to engage in polygamy, which is condoned in some situations under Islam; and immigrants who do not wish to alert the immigration authorities that they are married, either to a spouse they are leaving behind or because the complexity of immigration laws may make it difficult for spouses to visit on a tourist visa.
In Europe, it has traditionally been the churches' office to make marriages official by registering them. It was a significant step towards a clear separation of church and state and advance toward a secular society when German Chancellor Otto von Bismarck introduced the Zivilehe (civil marriage) in 1875. This law made the declaration of the marriage before an official clerk of the civil administration (both spouses affirming their will to marry) the procedure to make a marriage legally valid and effective, and reduced the clerical marriage to a private ceremony.
Most of the world's major religions tell couples they should marry before having sexual intercourse. They teach that unmarried people should not have sex, which they refer to as fornication. Fornication is sometimes socially discouraged or even criminalized. Sex with a married person other than one's spouse, called adultery, is generally condemned by most religions and has even been criminalized in some countries. Despite this condemnation, it is a widespread practice. About 10-15% of women and 20-25% of men in the U.S. engage in extramarital sex.
One recent trend illustrating the changing nature of families is the
rise in prevalence of single-parent or one-parent households. While
somewhat more common prior to the 20th century due to the more frequent
deaths of spouses, in the late 19th and early 20th century, the nuclear
family (husband, wife, and kids) became the societal norm in most
Western nations. But what was the prevailing norm for much of the 20th
century is no longer the actual norm, nor is it perceived as such.
In the 1950s, most people believed that single-parent households were "immoral," but by 1978, only 25% of Americans held that belief. Legal reforms in the 1960s and 1970s expanded the rights of nonmarital children and unwed mothers, breaking down the distinction between "legitimate" and "illegitimate". Additionally, the declining purchasing power of male wages in the 1970s, along with the increased levels of education among women, led an increasing percentage of married women (with and without kids) to enter the workforce. This change in the economic structure of the U.S. (i.e., the inability to support a nuclear family on a single wage) had significant ramifications for family life. Women and men began delaying the age of first marriage in order to invest in their earning power before marriage by spending more time in school. The increased levels of education among women - women now earn more than 50% of bachelor's degrees - positioned women to survive, economically, without the support of a husband. By 1997, 40% of births to unmarried American women were intentional, and, despite a still prominent gender gap in pay, women are able to survive as single mothers.
However, despite their ability to support their children, single parents often struggle financially to make ends meet. There is some evidence that when single parents move in with their parents (i.e., the child's grandparents), the odds of the single-parent family falling under the poverty line are reduced by 80%. Thus, three-generation households are more financially secure environments for raising children than are two-generation households when a single parent is trying to raise children alone.
Cohabitation is an intimate relationship which includes a common living place and
which exists without the benefit of legal, cultural, or religious
sanction. It can be seen as an alternative form of marriage, in that, in
practice, it is similar to marriage, but it does not receive the same
formal recognition by religions, governments, or cultures.
The cohabiting population, although inclusive of all ages, is mainly made up of those between the ages of 25 and 34. Couples decide to live together for a variety of common reasons.
While cohabitation may address many of these reasons, it does not improve the quality or duration of marriages. Individuals who cohabit before engagement (about 43% of all couples) report slightly lower marital satisfaction, dedication, and confidence, as well as more negative communication, compared with individuals who cohabit after engagement (16.4% of all couples in the US) and those who cohabit after marriage (40.5% of couples in the US). Early cohabitors also have a greater likelihood of divorce. The differences are generally small, but are not attributable to demographic differences between the three groups.
Some couples prefer cohabitation because it does not legally commit them for an extended period of time, and because it is easier to establish and dissolve. In some countries (such as Scotland) and some states in the United States, cohabiting is viewed as a legal relationship and is referred to as a common-law marriage after the duration of a specified period or the birth of a child to the couple. In California, such couples are defined as people who "have chosen to share one another's lives in an intimate and committed relationship of mutual caring," including having a "common residence." This recognition led to the creation of a Domestic Partners Registry, which is available to same-sex couples and opposite-sex couples in which at least one of the partners is age 62 or older, granting them limited legal recognition and some rights similar to those of married couples. Today, seven states (North Carolina, Mississippi, Virginia, West Virginia, Florida, Idaho and Michigan) still criminalize cohabitation by opposite-sex couples, although anti-cohabitation laws are generally not enforced.
In most U.S. states, there is no legal registration or definition of cohabitation, so demographers have developed various methods of identifying cohabitation and measuring its prevalence. The most important of these is the Census Bureau, which currently describes an "unmarried partner" as "A person age 15 years and over, who is not related to the householder, who shares living quarters, and who has a close personal relationship with the householder." Before 1995, the Bureau euphemistically identified any "unrelated" opposite-sex couple living with no other adults as POSSLQs, or Persons of Opposite Sex Sharing Living Quarters (these numbers are still reported to show historical trends). Cohabitation often does not have clear start and end dates, as people move in and out of each other's homes and sometimes do not agree on the definition of their living arrangement at a particular moment in time.
In 2005, the U.S. Census Bureau reported 4.85 million cohabiting couples, up more than 1,000 percent from 1960, when there were 439,000 such couples. More than half of couples in the US lived together, at least briefly, before walking down the aisle. In Sweden, Denmark and Norway, cohabitation is very common; roughly 50% of all children are born into families of unmarried couples. In late 2005, 21% of families in Finland consisted of cohabiting couples (all age groups). Of couples with children, 18% were cohabiting. Generally, cohabitation amongst Finns is most common for people under 30. Legal obstacles for cohabitation were removed in 1926 in a reform of the Finnish penal code. In the UK, 25% of children are now born to cohabiting parents. In France, 17.5% of couples were cohabiting as of 1999. In contrast to Europe and the U.S., cohabitation rates in the Middle East are much lower. In Israel, less than 3% of all couples cohabit. Cohabitation runs contrary to Islamic Law and is not common in predominantly Muslim countries.
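The "more than 1,000 percent" growth figure above follows directly from the two counts reported for 1960 and 2005; a quick sketch of the percent-change arithmetic:

```python
# Percent growth in U.S. cohabiting couples, 1960 -> 2005,
# using the two counts given in the text.
couples_1960 = 439_000
couples_2005 = 4_850_000

pct_growth = (couples_2005 - couples_1960) / couples_1960 * 100
print(f"{pct_growth:.0f}%")  # just over 1,000 percent, as stated
```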
While homosexuality has existed for thousands of years among both humans and other animals,
formal marriages between homosexual partners are a relatively recent
phenomenon. As of 2009, only two states in the United States recognized
marriages between same-sex partners, Massachusetts and Iowa, where
same-sex marriage was formally allowed as of May 17, 2004 and April
2009, respectively. Three additional states allow same-sex civil unions, New Jersey, Connecticut, and Vermont.
Between May 2004 and December 2006, 7,341 same-sex couples married in
Massachusetts. Assuming the percentage of homosexuals in Massachusetts
is similar to that of the rest of the nation (roughly 2.3% of men and
1.3% of women), the above number indicates that 16.7% of homosexuals in
Massachusetts married during that time.
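The estimate above can be sketched as a back-of-the-envelope calculation. Note that the Massachusetts adult population figure used here (roughly 4.9 million, assumed to be evenly split by sex) is an illustrative assumption, not a number given in the text:

```python
# Rough estimate of the share of homosexual adults in Massachusetts who
# married between May 2004 and December 2006, using figures from the text
# plus an ASSUMED adult population of ~4.9 million, split evenly by sex.
couples_married = 7_341
people_married = couples_married * 2   # two spouses per couple

adults = 4_900_000                     # assumed MA adult population
men = women = adults / 2
gay_men = 0.023 * men                  # ~2.3% of men (from the text)
gay_women = 0.013 * women              # ~1.3% of women (from the text)
gay_adults = gay_men + gay_women

share_married = people_married / gay_adults
print(f"{share_married:.1%}")          # roughly in line with the 16.7% figure
```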
Given how recent same-sex marriage is in the U.S., there is very
little existing data on marital dissolution rates. There is, however,
data on marital dissolution rates (i.e., divorce) in other countries
where same-sex marriage has been legally allowed for a longer period of
time. Andersson et al.
examined precisely this issue in Norway and Sweden, where same-sex
marriages or civil unions have been legally recognized since the
mid-1990s. Andersson et al. found several ways in which homosexual
unions differ from heterosexual unions in these countries, including a
higher rate of marital dissolution.
Andersson et al. attribute the higher divorce rate to a combination
of factors, including: less support from the community and less
encouragement or pressure from family and friends to make the marriage
One question often raised about same-sex unions is how they influence children raised in these households. Are children raised by same-sex parents more likely to be homosexual? The answer is surprising to some: No. Children raised in same-sex parented households are no more or less likely to be homosexual than children raised in heterosexual parented households. Even so, there are some differences. For instance, children raised in same-sex households have more complex perspectives on sexuality and have well-articulated responses to discrimination.
The starting point for any study of religion should begin with a definition of the concept. This is particularly important in the study of religion because the definition determines which groups will be included in the analysis. Three general definitions have been proposed, each of which will be discussed briefly. Each definition has its merits and detriments, but what one often finds is that the definition of religion employed by a particular researcher or in the investigation of a particular topic depends on the question being asked.
Sacred vs. Profane
Religion as Existential Questioning
Religion as Supernature
Having defined religion, we now move to one of the most common classification schemes employed in sociology for differentiating between different types of religions. This scheme has its origins in the work of Max Weber, but has seen numerous contributions since then. The basic idea is that there is a continuum along which religions fall, ranging from the protest-like orientation of sects to the equilibrium-maintaining churches. Along this continuum are several additional types, each of which will be discussed in turn. The reader may notice that many of the labels for the types of religion are commonly employed by non-sociologists to refer to religions and tend to be used interchangeably. Sociologists, when speaking technically, will not use these labels interchangeably, as they are designations for religions with very specific characteristics.
Before describing these different religions, it is important for the reader to understand that these classifications are a good example of what sociologists refer to as ideal types. Ideal types are pure examples of the categories. Because there is significant variation in each religion, how closely an individual religion actually adheres to its ideal type classification will vary. Even so, the classification scheme is useful as it also outlines a sort of developmental process for religions.
Church and Ecclesia
Cults or New Religious Movements
Many of the early sociological theorists proposed theories attempting to explain religion. In addition to these classical approaches to understanding religion, one modern explanation for the continued high levels of religiosity will be proposed along with a social psychological explanation that will attempt to explain the continued attraction of religion. These theories approach religion from slightly different perspectives, trying to explain: (1) the function of religion in society; (2) the role of religion in the life of the individual; and (3) the nature (and origin) of religion.
If one were to ask any sociologist of religion what the world religions are, they would likely give the standard answer that there are five: Christianity, Islam, Judaism, Hinduism, and Buddhism.
Traditionally, these have been considered world religions due to their size and/or influence on society. A detailed description of these religions is beyond the scope of this chapter, and the interested reader is encouraged to consult dedicated treatments of each tradition for more information.
One note is, however, in order concerning these religious groups. The classification of these groups as world religions is, like all classifications, artificial. Considering the remarkable dissimilarity between these five religious bodies, it is surprising that they are grouped together at all. Three are religions of the book and can be practiced somewhat distinctly from one's primary cultural identity (e.g., being an American and Episcopalian), while two are better understood as synonymous with culture (Buddhism and Hinduism). Additionally, the religions of the book have numerous branches, some so dissimilar that there is more contention within the world religions than between them (e.g., Mormons vs. fundamentalist Christians, Catholics vs. Episcopalians). Finally, while four of these religious groups are very populous, Judaism is not. In short, classification as a world religion seems somewhat arbitrary. Even so, it is worth familiarizing oneself with these religious groups to facilitate understanding.
Religion and Gender
Religion and Race
Religion and Class
Religion and Education
Religion and Health
Despite the claims of many classical theorists and sociologists, religion continues to play a vital role in the lives of individuals. In America, for example, church attendance has remained relatively stable in the past 40 years. In Africa and South America, the emergence of Christianity has occurred at a startling rate. While Africa could claim roughly 10 million Christians in 1900, recent estimates put that number closer to 200 million. The rise of Islam as a major world religion, especially its newfound influence in the West, is another significant development. In light of these developments, sociologists have been forced to reconsider the early proclamations of the demise of religion. In addition to discussing secularization and how the theory has been modified due to the continued existence of religion, religious fundamentalism is briefly touched upon as it is playing a significant role in society today.
Current Issues in the Study of Secularization
Perhaps the most well known definition of religion is that provided by Emile Durkheim. Durkheim argued that the definition of religion hinged on the distinction between things that are sacred (set apart from daily life) and things that are profane (everyday, mundane elements of society). The sacred elements of social life are what make up religion.
For example, the Torah in Judaism is sacred and treated with reverence and respect. The reverential treatment of the Torah would be contrasted with all sorts of more mundane things like cars or toys, which, for most people, are not considered sacred. Yet, the acute reader will be quick to point out that for some, cars (and even toys) are considered sacred and treated almost as reverentially as the Torah is treated in Judaism. This introduces one of the most significant criticisms of this definition - the typology can include things that are not traditionally understood to be religious (like cars or toys). As a result, the definition is extremely broad and can encompass substantial elements of social life. For instance, while most people in the United States would not consider their nationalism to be religious, they do hold the flag, the nation's capitol, and other national monuments to be sacred. Under this definition, nationalism would be considered religion.
Another definition of religion among social scientists (particularly social psychologists) views religion as any attempt to answer existential questions (e.g., 'Is there life after death?'). This definition casts religion in a functional light as it is seen as serving a specific purpose in society. As is the case with the sacred/profane typology, this definition is also often critiqued for being broad and overly encompassing. For instance, using this definition, someone who attends religious services weekly but makes no attempt to answer existential questions would not be considered religious. At the other extreme, an atheist who believes that existence ends with physical death would be considered religious because he/she has attempted to answer a key existential question. (For a critique of this definition, see Dobbelaere.)
The third social scientific definition views religion as the collective beliefs and rituals of a group relating to supernature. This view of religion draws a sometimes ambiguous line between beliefs and rituals relating to empirical, definable phenomena and those relating to undefinable or unobservable phenomena, such as spirits, god(s), and angels. This definition is not without its problems as well, as some argue it can also include atheists who have a specific position against the existence of a god (or gods). Yet because the beliefs and rituals are understood to be shared by a group, this definition could be argued to exclude atheists. Despite the problems with this last definition, it does most closely adhere to the traditional (and popular) view of what constitutes a religion.
The first type of religion is the church. The church classification describes religions that are all-embracing of religious expression in a society. Religions of this type are the guardians of religion for all members of the societies in which they are located and tolerate no religious competition. They also strive to provide an all-encompassing worldview for their adherents and are typically enmeshed with the political and economic structures of society.
The classical example of a church is the Roman Catholic Church, especially in the past. Today, the Roman Catholic Church has been forced into the denomination category because of religious pluralism or competition among religions. This is especially true of Catholicism in the United States. The change from a church to a denomination is still underway in many Latin American countries where the majority of citizens remain Catholics.
A slight modification of the church type is that of ecclesia. Ecclesias include the above characteristics of churches with the exception that they are generally less successful at garnering absolute adherence among all of the members of the society and are not the sole religious body. The state churches of some European countries would fit this type.
The denomination lies between the church and the sect on the continuum. Denominations come into existence when churches lose their religious monopoly in a society. A denomination is one religion among many. When churches and/or sects become denominations, there are also some changes in their characteristics. Johnstone identifies eight characteristics of denominations.
Most of the major religious bodies in the U.S. are denominations (e.g., Baptists, Methodists, Lutherans).
Sects are newly formed religious groups that form to protest elements of their parent religion (generally a denomination). Their motivation tends to be situated in accusations of apostasy or heresy in the parent denomination; they are often decrying liberal trends in denominational development and advocating a return to true religion.
Interestingly, leaders of sectarian movements (i.e., the formation of a new sect) tend to come from a lower socio-economic class than the members of the parent denomination, a component of sect development that is not entirely understood. Most scholars believe that when sect formation does involve social class distinctions, it involves an attempt to compensate for deficiencies in lower social status. An often-seen result of such factors is the incorporation into the theology of the new sect of a distaste for the adornments of the wealthy (e.g., jewelry or other signs of wealth).
Another interesting fact about sects is that after their formation, they can take only three paths - dissolution, institutionalization, or eventual development into a denomination. If the sect withers in membership, it will dissolve. If the membership increases, the sect is forced to adopt the characteristics of denominations in order to maintain order (e.g., bureaucracy, explicit doctrine, etc.). And even if the membership does not grow or grows slowly, norms will develop to govern group activities and behavior. The development of norms results in a decrease in spontaneity, which is often one of the primary attractions of sects. The adoption of denomination-like characteristics can either turn the sect into a full-blown denomination or, if a conscious effort is made to maintain some of the spontaneity and protest components of sects, an institutionalized sect can result. Institutionalized sects are halfway between sects and denominations on the continuum of religious development. They have a mixture of sect-like and denomination-like characteristics. Examples include: Hutterites and the Amish.
Most of the well-known denominations of the U.S. existing today originated as sects breaking away from denominations (or Churches, in the case of Lutheranism). Examples include: Methodists, Baptists, and Seventh-day Adventists.
Cults are, like sects, new religious groups. But, unlike sects, they can form without breaking off from another religious group (though they often do). The characteristic that most distinguishes cults from sects is that they are not advocating a return to pure religion but rather the embrace of something new or something that has been completely lost or forgotten (e.g., lost scripture or new prophecy). Cults are also more likely than other religious groups to be led by charismatic leaders, and these leaders tend to be the individuals who bring forth the new or lost component that is the focal element of the cult.
Cults, like sects, often integrate elements of existing religious theologies, but cults tend to create more esoteric theologies from many sources. Cults emphasize the individual and individual peace. Cults also tend to attract the socially disenchanted or unattached (though this isn't always the case; see Aho 1990 and Barker 1984). Cults tend to be located in urban centers where they can draw upon large populations for membership. Finally, cults tend to be transitory as they often dissolve upon the death or discrediting of their founder and charismatic leader.
Cults, like sects, can develop into denominations. As cults grow, they bureaucratize and develop many of the characteristics of denominations. Some scholars are hesitant to grant cults denominational status because many cults maintain their more esoteric characteristics (e.g., Temple Worship among Mormons). But given their closer resemblance to denominations than to the cult type, it is more accurate to describe them as denominations. Some denominations in the U.S. that began as cults include Christian Science and The Nation of Islam.
Finally, it should be noted that there is a push in the social scientific study of religion to begin referring to cults as New Religious Movements or NRMs. The reasoning is that cult has made its way into popular language as a derogatory label rather than as a designation for a specific type of religious group. Readers would do well to remember the social scientific meaning of the word cult and, in most cases, to realize that three of the major world religions originated as cults: Islam, Christianity, and Buddhism.
The Structural-Functional approach to religion has its roots in Emile Durkheim's work on religion. Durkheim argued that religion is, in a sense, the celebration and even (self-) worship of human society. Given this approach, Durkheim proposed that religion serves three major functions in society: establishing social cohesion, promoting social control, and providing meaning and purpose.
The primary criticism of the structural-functional approach to religion is that it overlooks religion's dysfunctions. For instance, religion can be used to justify terrorism and violence. Religion has often been the justification of and motivation for war. In one sense, this still fits the structural-functional approach, as it provides social cohesion among the members of one party in a conflict (e.g., the social cohesion among the members of a terrorist group is high), but in a broader sense, religion is obviously producing conflict rather than resolving it.
The social-conflict approach is rooted in Marx's analysis of capitalism. According to Marx, religion plays a significant role in maintaining the status quo. Marx argued that religion was actually a tool of the bourgeoisie to keep the proletariat content. Marx argued that religion is able to do this by promising rewards in the after-life rather than in this life. It is in this sense that Marx said, "Religion is the sigh of the oppressed creature, the heart of a heartless world, and the soul of soulless conditions. It is the opium of the people... The abolition of religion as the illusory happiness of the people is the demand for their real happiness" (p. 72). What Marx meant is that it would be necessary for the proletariat to throw off religion and its deceit about other-worldly rewards in order for the proletariat to rise up against the bourgeoisie, gain control over the means of production, and realize this-worldly rewards. Thus, the social-conflict approach to religion highlights how religion functions to maintain social inequality by providing a worldview that justifies oppression.
It should be reiterated here that Marx's approach to sociology was critical in the sense that it advocated change (in contrast to the knowledge for knowledge's sake approach). Because criticism of the system in place when he was writing was inherent in Marx's approach, he took a particular stand on the existence of religion, namely, that it should be done away with.
The social constructionist approach to religion presents a naturalistic explanation of the origins of religion. Berger laid a framework for this approach, "Religion is the human enterprise by which a sacred cosmos is established. Put differently, religion is cosmization in a sacred mode. Use of the word sacred in this context refers to a quality of mysterious and awesome power, other than man and yet related to him, which is believed to reside in certain objects of experience" (p. 25). In other words, for the social constructionist, religion is not created by (or for) supernatural beings but rather is the result of societies delineating certain elements of society as sacred. In the social constructionist frame of mind, these elements of society are then objectified in society so they seem to take on an existence of their own. As a result, they can then act back on the individual (e.g., the influence of a religion on the individual).
In short, plausibility structures are the societal elements that provide the support for a set of beliefs (not necessarily religious), including people, institutions, and the processes by which the beliefs are spread, e.g., socialization. Berger notes another important element of plausibility structures: "When an entire society serves as the plausibility structure for a religiously legitimated world, all the important social processes within it serve to confirm and reconfirm the reality of this world" (p. 47). In other words, in certain societies, every component of society functions to reinforce the belief system. A good example of this may be Iran, where everything is structured to reinforce the Islamic faith as reality.
Religious pluralism is the belief that one can overcome religious differences between different religions and denominational conflicts within the same religion. For most religious traditions, religious pluralism is essentially based on a non-literal view of one's religious traditions, allowing for respect to be engendered between different traditions on core principles rather than more marginal issues. It can perhaps be summarized as an attitude that rejects focus on immaterial differences and instead gives respect to those beliefs held in common.
The existence of religious pluralism depends on the existence of freedom of religion. Freedom of religion exists when different religions of a particular region possess the same rights of worship and public expression. Freedom of religion is consequently weakened when one religion is given rights or privileges denied to others, as in certain European countries where Roman Catholicism or regional forms of Protestantism have special status. (For example see the Lateran Treaty and Church of England; also, in Saudi Arabia the public practice of religions other than Islam is forbidden.) Religious freedom has not existed at all in some communist countries where the state restricts or prevents the public expression of religious belief and may even actively persecute individual religions (see for example North Korea).
Religious pluralism has also been argued to be a factor in the continued existence of religion in the U.S. This theoretical approach proposes that because no religion was guaranteed a monopoly in the U.S., religious pluralism led religions in the U.S. to become capitalist organizations. As a result, religions are now better understood as capitalist corporations peddling their wares in a highly competitive market than as monopolistic churches (as Roman Catholicism was prior to The Reformation and, some might argue, still is in Latin America) or as small, fervent, protest-like sects. The result of religious pluralism is, like capitalism generally in the U.S., a consumer attitude: people consume religion like they do other goods. Because religions are good at marketing themselves as the providers of social psychological compensators (see below), they have been successful.
The primary social-psychological reason why religion continues to exist is because it answers existential questions that are difficult, if not impossible, to address scientifically. For instance, science may not be able to address the question of what happens when someone dies other than to provide a biological explanation (i.e., the body's cells eventually die due to lack of nutrition, the body then decomposes, etc.). Science is also unable to address the question of a higher purpose in life other than simply to reproduce. Finally, science cannot prove or disprove the existence of a higher being. Each of these existential components is discussed below in greater detail.
Studies have found that fear is a factor in religious conversion. Altemeyer and Hunsberger (1997), in their book Amazing Conversions, note that one of the primary motivations for people to seek religion was fear of the unknown; specifically, fear of the after-life and what it portends. While fear likely does not motivate all religious people, it certainly is a factor for some. Religion can provide a non-falsifiable answer to the question of what happens after people die. Such answers can provide comfort for individuals who want to know what will happen when they die.
Religion providing a purpose in life was also a motivation found by Altemeyer and Hunsberger (1997) in their analysis of religious converts. Batson et al. and Spilka, Hunsberger, Gorsuch, and Hood also point to this factor as an explanation for the continued interest in religiosity. Interestingly, Diener, in his research on subjective well-being (SWB), notes that one of the keys to high SWB (a.k.a. happiness) is a goal or purpose in life. However, he introduces a caveat that is particularly telling for religious individuals: for the most positive impact on SWB, goals should be difficult but attainable. Difficult but attainable is a good description of salvation for religious people. People have to work toward salvation, but they believe it can be achieved. Thus, religion can provide a goal and purpose in life for people who believe they need one.
Belief in God is attributable to a combination of the above factors (i.e., God's existence alleviates fear of death and provides meaning), but is also informed by a discussion of socialization. The biggest predictor of adult religiosity is parental religiosity; if a person's parents were religious when he was a child, he is likely to be religious when he grows up. Children are socialized into religion by their parents and their peers and, as a result, they tend to stay in religions. Alternatively, children raised in secular homes tend not to convert to religion. This socialization premise is the backdrop for Altemeyer and Hunsberger's (1997) main thesis: they found some interesting cases where just the opposite seemed to happen; secular people converted to religion and religious people became secular. Despite these rare exceptions, the process of socialization is certainly a significant factor in the continued existence of religion.
Combined, these three social-psychological components explain, with the help of religious pluralism, the continued high levels of religiosity in the U.S. People are afraid of things they do not understand (death), they feel they need a purpose in life to be happy (a.k.a. SWB), and they are socialized into religion and believing in God by parents.
Batson et al. provide a clear summary of the differences in religiosity between men and women: across most measures, women are more religious than men.
One explanation for the greater involvement of women in religion is socialization. Batson et al. discuss the idea that women may be socialized into roles in which religion is more highly emphasized than it is in men's roles.
Counter-intuitively, even though women are more religious than men, many religions continue to disenfranchise women. Roughly 50% of the major denominations in the U.S. today do not allow women to be ordained or otherwise serve in ways that are equal to men. Denominations that do not allow female ordination include: Roman Catholicism, Southern Baptists, and Mormons. The primary reasons these religions refuse to allow women to be ordained are Biblical literalism (believing the Bible is the literal word of god and not recognizing that it is a historical work written in a different time) and sacramentalism (the belief that the person performing sacramental rituals must represent Jesus in his "manliness"). However, Chaves, who delineated these reasons in his book on female ordination, notes that these are more akin to "manifest" reasons and the real or latent reason is because these religions continue to cater to a specific market niche - individuals who oppose modernity. Fundamentalist religions in general - including fundamentalist Islam and fundamentalist Christianity - aim primarily to retain the power of men and subjugate women.
Even within the religions that do allow women equal rights and ordination, women experience discrimination. Women who pursue ordination in these religions find it harder to find a job pastoring a congregation, are more likely to be assistant pastors than are men, and are more likely to find jobs in congregations that are smaller, rural, and/or pay less. As of 2006, women make up about 15% of clergy in the U.S.
Batson et al. provide a clear summary of differences in religiosity by race (limited presently to Americans of African and European descent). They include five distinctions in their discussion, each indicating that Americans of African descent are more likely to be religious than Americans of European descent.
Batson et al. attribute this to the religious institutions' role in the lives of Americans of African descent. Religion has been one of the primary resources that people of African descent have drawn upon since their arrival in the U.S. Religion has provided a sense of community and support for African-Americans and was also extremely influential in the Civil Rights Movement. As a result, religion has a more prominent role in the day-to-day lives of African-Americans.
Religion is also divided by race. Only 8% to 10% of congregations in the U.S. today are multi-racial (meaning no one race/ethnicity makes up more than 80% of the congregation). There are complicated historical reasons for this. During the U.S. period of slavery, blacks and whites worshiped in the same churches, though blacks were relegated to the balcony and primarily taught to be obedient to their masters. After the American Civil War, former slaves left the white-dominated religions and created their own as they were mistreated in the white-dominated churches. Today, predominantly black churches and predominantly white churches remain distinct, with very few churches catering to mixed-race congregations (though megachurches tend to be more multi-racial).
Emerson and Smith convincingly argue that white Evangelical Christians in the U.S., because of their belief in individualism, actually contribute to racial inequality. This is the result of white Evangelicals refusing to see structural factors that contribute to inequality and their proclivity to blame poor blacks for their poverty. White Evangelical Christians are more likely than white Mainline Christians and the non-religious to attribute black/white inequality to innate biological inferiority or laziness.
Socioeconomic status (SES) or class tends to be associated more with how religion is practiced than with degree of religiosity (i.e., very religious vs. not very religious). Members of lower classes tend to associate with more fundamentalist religions and sect-like groups. Members of the middle class tend to belong to more formal churches. "In the United States, Presbyterians and Episcopalians tend to be above average in SES; Methodists and Lutherans about average; and Baptists and members of Protestant fundamentalist sects below average" (pp. 38-39). These variations in SES by religious group are illustrated in the figure below.
Like income, educational attainment tends to vary by religious group. People in more fundamentalist religious groups tend to have lower levels of educational attainment while people in more liberal religious groups tend to have higher levels of educational attainment. This relationship between education and religion is illustrated in the figure below.
An important study by Johnson draws a complex but interesting picture of the relationship between religion and educational attainment. Johnson found a dichotomization of religiosity as a result of college education. Those who make it through college with their religious beliefs intact tend to be more devout than those who do not attend college to begin with yet remain religious. On the other side, those who do not make it through college with their religious beliefs intact end up far less orthodox and are more likely to disavow religion altogether. In short, college education strengthens both religiosity and irreligiosity; it just depends on where one ends up. Johnson's finding is particularly insightful in light of the social psychological theory of cognitive dissonance, which argues that religious people will (at least initially) reinforce their beliefs in light of disconfirming evidence.
According to Batson et al., the relationship between religion and mental health is highly nuanced. In order to understand this nuanced relationship, it is necessary to clarify the different types of religiosity Batson et al. are studying. Batson et al. distinguish between three types of religiosity. These types or orientations stem from the work of Gordon Allport, who distinguished two types of religiosity and provided their corresponding labels: intrinsic and extrinsic religiosity. Extrinsic religiosity refers to people who use religion as a means to an end (e.g., social contacts). Intrinsic religiosity refers to people who see religion as the end (e.g., religion is the answer to life's questions). Batson et al. add a third: quest religiosity. Quest religiosity refers to the religious seeker who constantly asks questions and may not believe there are any clear answers to them.
If one does not take into consideration the different types of religiosity (i.e., extrinsic, intrinsic, and quest), religion tends to be associated with poorer mental health (p. 240). Specifically, Batson et al. find a negative relationship between religion and three components of mental health: "personal competence and control, self-acceptance or self-actualization, and open-mindedness and flexibility" (p. 240).
However, if one does take into consideration the different types of religiosity, then intrinsic and quest oriented individuals tend to see mental health benefits from their religious involvement. Extrinsically-oriented individuals, on the other hand, find that their religious involvement results in a negative influence on their mental health (p. 289).
Secularization is a varied term with multiple definitions and levels of meaning. It should also be noted that, in addition to having multiple definitions, secularization is both a theory and a process. As a theory, it refers to the belief of some scholars (e.g., Freud, Weber) that as society modernized it would also see a decline in levels of religiosity. This understanding of classical secularization theory is currently being refined and modified (see discussion below). The 'process' component of secularization refers to how the theory is actualized, and it is in this sense that secularization has multiple definitions. The most common meaning is in reference to the decline of levels of religiosity in society, but this is a broad and diffuse meaning that should be clarified by referring to one of the more specific meanings outlined below.
Sommerville outlined six uses of the term secularization in the scientific literature. The first five are more along the lines of definitions, while the sixth is more a clarification of how the term should be used:
At present, secularization (as understood in definition five above) is being debated in the sociology of religion. Some scholars have argued that levels of religiosity are not declining (though their argument tends to be limited to the U.S., an admitted anomaly in the developed world). As there appears to be some merit to this position, other scholars have countered by introducing the idea of neo-secularization, which broadens the definition of individual level religious decline by arguing that secularization can also refer to the decline of religious authority. In other words, rather than using a-religious apostates as the solitary measure of a population's secularity, neo-secularization theory argues that individuals are increasingly looking outside of religion for authoritative positions on different topics. Neo-secularizationists would argue that religion is no longer the authority on issues like whether to use birth control and would therefore argue that while religious affiliation may not be declining in the U.S. (a debate still taking place), religion's authority is declining and secularization is taking place.
Fundamentalism describes a movement to return to what is considered the defining or founding principles of a religion. It has especially come to refer to any religious enclave that intentionally resists identification with the larger religious group in which it originally arose, on the basis that fundamental principles upon which the larger religious group is supposedly founded have become corrupt or displaced by alternative principles hostile to its identity. A full analysis of what constitutes religious fundamentalism is beyond the scope of this chapter. However, the interested reader is encouraged to explore this topic further by reading the Wikipedia article on fundamentalism.
Religious fundamentalism is of great importance to sociologists because of its increasingly prominent role in social life, especially politics. Kenneth Wald points out how religious fundamentalism can be detrimental to politics, specifically a democratic system. The fundamentalist approach to politics can hurt a democratic system because of fundamentalists' unwillingness to compromise. Religious fundamentalists tend to take the view that 'God said it, so it will have to be this way.' Because anything short of God's will is unacceptable, religious fundamentalists don't allow for a middle ground, which is a vital element of the democratic process. While widely associated with religious fundamentalism, suicide bombers are not exclusively religious; only 43% are identifiably religious. Most suicide bombings are carried out for strategic, political reasons, and usually involve a dispute over resources. Religion can serve as a justification and can even help motivate the bombers, but not all terrorism is inspired by religion and religious fundamentalists.
What the future of religious fundamentalism holds for human society is unknown, but because of the impact of this particular religious approach on society today, religious fundamentalism warrants continued study.
The World Health Organization (WHO) defines health as "a state of complete physical, mental and social well-being, and does not consist only of the absence of disease or infirmity" (source). Though this is a useful definition, some would consider it idealistic and unrealistic, since by the WHO definition 70-95% of people would be classified as unhealthy. There are also other definitions of health, e.g., statistical definitions (such as systolic and diastolic blood pressure falling within a given range) and functional definitions (the ability to carry out Activities of Daily Living, or ADLs). The WHO definition also overlooks the fact that several factors influence both the definition of health and standards of health.
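A statistical definition of health, such as the blood-pressure example above, can be made concrete with a short sketch. The function below classifies a reading against commonly cited clinical cutoffs; the specific thresholds and category names are illustrative assumptions for this example, not part of the WHO definition or of this chapter's source material.

```python
# Illustrative sketch of a "statistical" definition of health:
# classify a blood-pressure reading (in mmHg) against assumed
# clinical cutoffs. Thresholds are illustrative, not authoritative.

def classify_blood_pressure(systolic, diastolic):
    """Return a category label for a blood-pressure reading."""
    if systolic >= 140 or diastolic >= 90:
        return "stage 2 hypertension"
    elif systolic >= 130 or diastolic >= 80:
        return "stage 1 hypertension"
    elif systolic >= 120:
        return "elevated"
    else:
        return "normal"

print(classify_blood_pressure(118, 76))   # normal
print(classify_blood_pressure(150, 95))   # stage 2 hypertension
```

The point of the sketch is sociological rather than clinical: a statistical definition draws a sharp line through a continuous measurement, and where that line sits is itself a social and historical decision.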
What it means to be healthy can vary from culture to culture and is often connected with advances in technology. In some cultures, larger body sizes are seen as a sign of healthiness as it indicates an individual has a preponderance of food. In other cultures, largess is more closely associated with unhealthy lifestyles (e.g., lack of exercise, poor eating habits, etc.). Advances in technology have also expanded the idea of what it means to be healthy. What are understood today to be healthy practices were not emphasized prior to clear understandings of disease and the contributions of lifestyles to health.
Health care (or healthcare) is an industry associated with the prevention, treatment, and management of illness along with the promotion of mental and physical well-being through the services offered by the medical and allied health professions. Healthcare is one of the world's largest and fastest-growing industries, consuming over 10 percent of the gross domestic product of most developed nations. In 2000, health care costs paid to hospitals, doctors, diagnostic laboratories, pharmacies, medical device manufacturers, and other components of the health care system consumed an estimated 14 percent of the gross national product of the United States, the largest share of any country in the world. For the G8 countries (eight of the most developed countries in the world), the average is about nine percent.
The sociology of health and medicine is concerned with the distribution of healthcare services globally, in particular inequalities in healthcare, and how conceptions of health have changed over time.
While technology has advanced the practice of medicine and generally improved health, not all people have the same access to health care or the same quality of health care. According to the Health Resources and Services Administration of the U.S., health disparities are the "population-specific differences in the presence of disease, health outcomes, or access to health care" (source). Of particular interest to sociologists are the differences in health and quality of health care across racial, socioeconomic, and ethnic groups.
In the United States, health disparities are well documented in minority populations such as African Americans, Native Americans, Asian Americans, and Hispanics. When compared to European Americans, these minority groups have a higher incidence of chronic diseases, higher mortality, and poorer health outcomes. Among the disease-specific examples of racial and ethnic disparities in the United States is the cancer incidence rate among African Americans, which is 10 percent higher than among European Americans. In addition, American adults of African and Hispanic lineage have approximately twice the risk of developing diabetes as American adults of European lineage. Minorities also have higher rates of cardiovascular disease, HIV/AIDS, and infant mortality than whites. There are also substantial differences in health based on social class or socioeconomic status. As of 2009, diseases that were common only in the developing world (e.g., toxocariasis, cysticercosis, Chagas disease, and cytomegalovirus) were becoming a serious concern among economically disadvantaged populations in the U.S.
Causes of Health Disparities
Examples of Health Disparities
As noted in the previous section, disparities in health care are often related to an individual's or a group's ability to pay for health care. This section discusses the costs of healthcare and the different approaches to health care that have been taken by governments around the world.
Private Insurance and Free-Market Health Care
Publicly Funded Health Care
Parallel Public/Private Systems
Difficulties of analysis
The following sections explore some of the ways behaviors and environment can impact human health.
All human societies have beliefs that provide explanations for, and responses to, childbirth, death, and disease. Throughout the world, illness has often been attributed to witchcraft, demons, or the will of the gods, ideas that retain some power within certain cultures and communities (see faith healing). However, the rise of scientific medicine in the past two centuries has altered or replaced many historic health practices.
There is debate about what causes health disparities between ethnic and racial groups. However, it is generally accepted that disparities can result from three main areas:
Most attention on the issue has been given to the health outcomes that result from differences in access to medical care among groups and the quality of care different groups receive. Reasons for disparities in access to health care are many, but can include the following:
There are a number of ways in which health disparities play out based on different systems of stratification. Health disparities resulting from economic stratification are wide-ranging. Poorer women have reduced access to mammograms to detect breast cancer, even when they are better candidates for screening. Wealthier people live longer than poorer people. Wealthier people in the U.S. today live about 4.5 years longer than poorer people (79.2 vs. 74.7, respectively). Additionally, affluent and more educated people are more likely to take advantages of advances in medical science and technology and have seen a more rapid decline in smoking rates, which directly improves health. Disparities in health care between the rich and poor are not inevitable - they are directly correlated with disparities in wealth. The differential and unequal treatment of the rich and poor follows the gap between the rich and the poor: From 1966 to 1980, socioeconomic disparities declined in tandem with a decline in mortality rates, but the disparity has since increased as income stratification has.
Financial access to health care is not the only factor that affects health; poorer people are generally less likely to exercise, resulting in overall poorer health as well. This is probably due to fewer opportunities to exercise and a less well-developed understanding of the importance of exercise. Poorer individuals also live in more dangerous neighborhoods and are more likely to eat unhealthy food and engage in risky or unhealthy behavior, all of which contribute to poorer health.
Researchers also find health disparities based on gender stratification. One study found that women are less likely than men to be recommended for knee replacement surgery, even when they have the same symptoms. While it was unclear what role the sex of the recommending physicians played, the authors of this study encourage women to challenge their doctors in order to get care equivalent to men.
Health disparities based on race also exist. Similar to the difference in life expectancy found between the rich and the poor, affluent white women live 14 years longer in the U.S. (81.1 years) than poor black men (66.9 years). There is also evidence that blacks receive less aggressive medical care than whites, similar to what happens with women compared to men. Black men describe their visits to doctors as stressful and report that physicians do not provide them with adequate information to implement the recommendations they are given. Another contributor to the overall worse health of blacks is the incidence of HIV/AIDS: the rate of new AIDS cases is 10 times higher among blacks than whites, and blacks are 20 times as likely to have HIV/AIDS as are whites.
Health insurance is a type of insurance whereby the insurer pays the medical costs of the insured if the insured becomes sick due to covered causes or accidents. The insurer may be a private organization or a government agency. According to 2005 United States Census Bureau data, approximately 85% of Americans have health insurance. Approximately 60% obtain health insurance through their place of employment or as individuals, and various government agencies provide health insurance to 25% of Americans.
While the rising cost of health care is debated, some contributing causes are accepted. Aging populations require more health care as a result of increased life expectancy. Advances in medical technology have arguably driven up the prices of procedures, especially cutting edge and experimental procedures. Poor health choices also increase health care costs by increasing the incidence of disease and disability. Preventable health issues are related to:
In theory, people could lower health insurance prices by exercising, eating healthy food, and avoiding addictive substances that are damaging to the body. Healthier lifestyles protect the body from disease, and with fewer diseases, there would be fewer health care related expenses.
Another element of high health care costs is related to the private management of healthcare by large corporations. While this is discussed in greater detail below, it is worth noting that corporate profits have also played a role in increased health care premiums.
Two types of health insurance have developed in modern society: private health insurance (or free-market) models and publicly funded health insurance models. The benefits and drawbacks of each of these models are discussed in this and the following section.
Private insurance refers to health insurance provided by a non-governmental organization, usually a privately owned or publicly traded corporation. Private insurance as the primary provider of health care in a developed nation is really only found in the United States. It is important to note that while the United States has the most private system of any developed nation, there is a substantial public component: of every dollar spent on health care in the United States, 44 cents comes from some level of government. In addition, government also increases private sector costs by imposing licensing and regulatory barriers to entry into both the practice of medicine and the drug trade within America. Private practitioners also face inflated costs through the government's use of protectionist measures against foreign companies to uphold the intellectual property rights of the U.S. pharmaceutical industry.
Advocates of the private model argue that this approach to health care has the following benefits:
Despite these possible benefits, the private insurance approach is not without its drawbacks. Following are some of the more common criticisms of the private health insurance approach to health care:
Concerns about health insurance are prevalent in the United States. A June 2005 survey of a random national sample by the Kaiser Family Foundation found that twice as many Americans are worried about rising health care costs as about losing their job or being the victim of a terrorist attack (source).
An alternative to private health insurance and the free-market approach to health care is publicly funded health care. Publicly funded medicine is health care that is paid wholly or mostly by public funds (i.e., taxes). Publicly funded medicine is often referred to as socialized medicine by its opponents, whereas supporters of this approach tend to use the terms universal healthcare, single payer healthcare, or National Health Services. It is seen as a key part of a welfare state.
This approach to health care is the most common and popular among developed (and developing) nations around the world today. The majority of developed nations have publicly funded health systems that cover the great majority of the population. For some examples, see the British National Health Service, Canadian Medicare, and Australian Medicare.
Even among countries that have publicly funded medicine, different countries have different approaches to the funding and provision of medical services. One area of difference is whether the system is funded from general government revenues (e.g., Italy, Canada) or through a government social security system (France, Japan, Germany) with a separate budget funded by special separate taxes. Another difference is how much of the cost of care is paid for by the government or social security system: in Canada, all hospital care is paid for by the government, while in Japan patients must pay 10 to 30 percent of the cost of a hospital stay. What is covered by the public system is also important; for instance, the Belgian government pays the bulk of the fees for dental and eye care, while the Australian government covers neither.
The United States has been virtually alone among developed nations in not maintaining a publicly-funded health-care system since South Africa adopted a publicly-funded system after toppling its apartheid regime. However, a few states in the U.S. have taken serious steps toward achieving this goal, most notably Minnesota. Other states, while not attempting to insure all of their residents strictly speaking, cover large numbers of people by reimbursing hospitals and other health-care providers using what is generally characterized as a charity care scheme, which often includes levies. A 2007 New York Times/CBS Poll found that a majority of Americans support some form of universal healthcare (59%), and just as many Americans support it even if it means raising their taxes. Support is mixed between Democrats, Republicans, and Independents: 63% of Democrats support universal healthcare, 49% of Independents do, and 30% of Republicans do.
Publicly funded medicine may be administered and provided by the government, but in some systems that is not an obligation: there exist systems where medicine is publicly funded, yet most health providers are private entities. The organization providing public health insurance is not necessarily a public administration, and its budget may be isolated from the main state budget. Likewise, some systems do not necessarily provide universal healthcare, nor restrict coverage to public health facilities.
Proponents of publicly funded medicine cite several advantages over private insurance or free-market approaches to health care:
Publicly funded health care is not without its criticisms. Some purported disadvantages of the public system include:
Almost every country that has a publicly funded health care system also has a parallel private system, generally catering to the wealthy. While the goal of public systems is to provide equal service, the egalitarianism tends to be closer to partial egalitarianism. Every nation either has parallel private providers or its citizens are free to travel to a nation that does, so there is effectively a two-tier healthcare system that reduces the equality of service. Since private providers are typically better paid, those medical professionals motivated by remunerative concerns migrate to the private sector while the private hospitals also get newer and better equipment and facilities. A number of countries such as Australia attempt to solve the problem of unequal care by insisting that doctors divide their time between public and private systems.
Proponents of these parallel private systems argue that they are necessary to provide flexibility to the system and are a way to increase funding for the health care system as a whole by charging the wealthy more. Opponents believe that they are allowed to exist mainly because politicians and their friends are wealthy and would prefer better care. They also argue that all citizens should have access to high quality healthcare. The only country not to have any form of parallel private system for basic health care is Canada. However, wealthy Canadians can and do travel to the United States for care.
Also, in some cases, doctors are so well paid in both systems that prestige is often more important to them than remuneration. This is very much the case in the United Kingdom where private medicine is seen as less prestigious than public medicine by much of the population. As a result, the best doctors tend to spend the majority of their time working for the public system, even though they may also do some work for private healthcare providers. The British in particular tend to use private healthcare to avoid waiting lists rather than because they believe that they will receive better care from it.
Cost-benefit analysis of healthcare is extremely difficult to do accurately, or to separate from emotional entanglement. For instance, prevention of smoking or obesity is presented as having the potential to save the costs of treating illnesses arising from those choices. Yet, if those illnesses are fatal or life shortening, they may reduce the eventual cost to the system of treating that person through the rest of their life, and it is possible that they will die of an illness every bit as expensive to treat as the ones they avoided by a healthy lifestyle.
This has to be balanced against the loss of taxation or insurance revenue that might come should a person have a longer productive (i.e. working and tax or insurance-paying) life. The cost-benefit analysis will be very different depending on whether you adopt a whole-life accounting, or consider each month as debits and credits on an insurance system. In a system financed by taxation, the greatest cost benefit comes from preserving the working life of those who are likely to pay the most tax in the future, i.e. the young and rich.
Few politicians would dare to present the big picture of costs in this way, because they would be condemned as callous. Nevertheless, behind the scenes, a responsible government must be performing cost analysis in order to balance its budget; it is not likely, however, to take the most purely cost effective route. It may choose to provide the best health care according to some other model, but the cost of this still must be estimated and funded, and there is no uncontroversial definition of best.
In producing a definition of quality of healthcare, there is an implication that quality can be measured. In fact, the effectiveness of healthcare is extremely difficult to measure, not only because of medical uncertainty, but because of intangible quantities like quality of life. This is likely to lead to systems that measure only what is easy to measure (such as length of life, waiting times or infection rates). As a result, the importance of treating chronic, but non-fatal conditions, or of providing the best care for the terminally ill may be reduced. Thus, it is possible for personal satisfaction with the system to go down, while metrics go up.
The Centers for Disease Control and Prevention describes tobacco use as "the single most important preventable risk to human health in developed countries and an important cause of premature death worldwide" (source). A person's increased risk of contracting disease is directly proportional to the length of time that a person continues to smoke as well as the amount smoked. However, if someone stops smoking, the risks of developing disease gradually decline as the damage to the body is repaired.
The main health risks from tobacco smoking pertain to diseases of the cardiovascular system, in particular smoking is a major risk factor for: myocardial infarction (heart attack); diseases of the respiratory tract, such as Chronic Obstructive Pulmonary Disease (COPD) and emphysema; and cancer, particularly lung cancer and cancers of the larynx and tongue. Prior to World War I, lung cancer was considered to be a rare disease, one most physicians would never see during their career. With the postwar rise in popularity of cigarette smoking came a virtual epidemic of lung cancer.
Alcoholism is a dependency on alcohol characterized by craving (a strong need to drink), loss of control (being unable to stop drinking despite a desire to do so), physical dependence and withdrawal symptoms, and tolerance (increasing difficulty of becoming drunk).
Although acceptance of the American Disease Model is not without controversy, the American Medical Association, the American Psychiatric Association, the American Hospital Association, the American Public Health Association, the National Association of Social Workers, the World Health Organization, and the American College of Physicians have all classified alcoholism as a disease.
In a 1992 JAMA article, the Joint Committee of the National Council on Alcoholism and Drug Dependence and the American Society of Addiction Medicine published this definition for alcoholism: "Alcoholism is a primary chronic disease with genetic, psychosocial, and environmental factors influencing its development and manifestations. The disease is often progressive and fatal. It is characterized by impaired control over drinking, preoccupation with the drug alcohol, use of alcohol despite adverse consequences, and distortions in thinking, mostly denial. Each of these symptoms may be continuous or periodic."
Symptoms of a person's dependence on alcohol may include, but are not limited to, a felt need to consume alcohol or an inability to resist alcohol when it is offered. Though these symptoms often arise from a physical dependence on the substance, it is not uncommon for individuals, especially teenagers and adolescents between the ages of fifteen and twenty, to rely on alcohol as a means of social interaction. If a person cannot refuse alcohol in the presence of others, insists on drinking alcohol excessively for fear of alienation and neglect, or feels they cannot interact socially with others unless under the influence, then this person is considered socially dependent on the substance. These traits can be noticed in individuals who relocate (such as students attending a new university), whereby an individual with no past history of alcohol consumption begins to consume alcohol in order to associate and relate to others. Social dependence, though not physically threatening in its early stages, can lead to physical dependence if the person cannot control their urges and, more so, their reasons for drinking.
The causes of alcohol abuse and dependence cannot be easily explained, but the long-standing, unscientific prejudice that alcoholism is the result of moral or ethical weakness on the part of the sufferer has largely been displaced. Recent polls show that 90% of Americans currently believe that alcoholism is, in fact, a disease (source?). Of the two thirds of the North American population who consume alcohol, 10% are alcoholics and 6% consume more than half of all alcohol. Stereotypes of alcoholics are often found in fiction and popular culture. In modern times, the recovery movement has led to more realistic portraits of alcoholics and their problems.
The social problems arising from alcoholism can include loss of employment, financial problems, marital conflict and divorce, convictions for crimes such as drunk driving or public disorder, loss of accommodation, and loss of respect from others who may see the problem as self-inflicted and easily avoided. Exhaustive studies, including those by author Wayne Kritsberg, show that alcoholism affects not only the addicted but can profoundly impact the family members around them. Children of alcoholics can be affected even after they are grown. This condition is usually referred to as The Adult Children of Alcoholics Syndrome. Al-Anon, a group modelled after Alcoholics Anonymous, offers aid to friends and family members of alcoholics.
Effects of fetal alcohol exposure
Obesity is a condition in which the natural energy reserve of humans, which is stored in fat tissue, is expanded far beyond usual levels to the point where it impairs health. While cultural and scientific definitions of obesity are subject to change, it is accepted that excessive body weight predisposes to various forms of disease, particularly cardiovascular disease.
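The passage above notes that scientific definitions of obesity are subject to change. One common statistical operationalization, introduced here only as an illustration (the text does not name it), is the body mass index (BMI): weight in kilograms divided by the square of height in meters, with conventional cutoffs for the weight categories.

```python
# Sketch of the body mass index (BMI), a common statistical
# operationalization of excessive body weight: weight (kg) / height (m)^2.
# The category cutoffs below follow conventional bands; they are
# illustrative assumptions, not a clinical tool.

def bmi(weight_kg, height_m):
    """Body mass index: weight divided by height squared."""
    return weight_kg / height_m ** 2

def bmi_category(value):
    """Map a BMI value onto the conventional weight categories."""
    if value < 18.5:
        return "underweight"
    elif value < 25:
        return "normal weight"
    elif value < 30:
        return "overweight"
    else:
        return "obese"

b = bmi(95.0, 1.75)
print(round(b, 1), bmi_category(b))   # 31.0 obese
```

As with the blood-pressure example earlier in the chapter, the cutoffs are conventions, which is precisely why cultural and scientific definitions of obesity can shift over time.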
There is continuous debate over obesity, at several levels. The scientific evidence informing these debates is more contradictory than most simple arguments assume. Statistics demonstrating correlations are typically misinterpreted in public discussion as demonstrating causation, a fallacy known as a spurious relationship.
In several human cultures, obesity is (or has been) associated with attractiveness, strength, and fertility. Some of the earliest known cultural artifacts, known as Venuses, are pocket-sized statuettes representing an obese female figure. Although their cultural significance is unrecorded, their widespread use throughout pre-historic Mediterranean and European cultures suggests a central role for the obese female form in magical rituals and implies cultural approval of (and perhaps reverence for) this body form.
Obesity functions as a symbol of wealth and success in cultures prone to food scarcity. Well into the early modern period in European cultures, it still served this role. Contemporary cultures which approve of obesity, to a greater degree than European and Western cultures, include African, Arabic, Indian, and Pacific Island cultures. The implication of these cultural variations in attitudes toward obesity is that ideal body shape is culturally relative. Another illustration of this is the finding that feminists are less likely to consider "ultra-thin" the ideal body image than non-feminists, indicating the culture and philosophy of feminism countermands the effects of popular media. In Western cultures, obesity has come to be seen more as a medical condition than as a social statement.
Various stereotypes of obese people have found their way into expressions of popular culture. A common stereotype is the obese character who has a warm and dependable personality, presumably in compensation for social exclusion, but equally common is the obese vicious bully. Gluttony and obesity are commonly depicted together in works of fiction. It can be argued that depiction in popular culture adds to and maintains commonly perceived stereotypes, in turn harming the self-esteem of obese people. A charge of prejudice and/or discrimination on the basis of appearance could be leveled against these depictions.
Causes of Obesity
Policy Responses to Obesity
Of course, not everyone who consumes alcohol is an alcoholic or is in danger of becoming an alcoholic. In fact, there is some debate as to whether there are benefits derived from moderate consumption of alcohol. The controversy over moderate drinking is an ongoing debate about the claimed benefit or harm to human health from moderate consumption of alcoholic beverages. Moderate consumption typically means the consumption of 1 to 3 drinks of an alcoholic beverage a day; the number varies with age and gender. There is wide consensus that over-drinking is harmful: alcohol damages human cells and organs such as the brain, liver, and kidneys.
Many studies show that consumers of up to 3 drinks a day have a 10% to 40% lower risk of coronary heart disease than those who abstain. Rimm et al. predict a 24.7% decrease in the risk of coronary heart disease based upon 30g alcohol/day.
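The percentage figures above can be read as relative risks. As a hedged sketch of the arithmetic only: the risk-ratio value below is back-calculated from the quoted 24.7% decrease, not taken from Rimm et al. directly.

```python
# Sketch: how a relative risk translates into the "percent lower
# risk" figures quoted in the text. The relative risk of 0.753 is
# back-calculated from the quoted 24.7% decrease for illustration,
# not drawn from Rimm et al. themselves.

def percent_risk_reduction(relative_risk):
    """Percent decrease in risk implied by a relative risk below 1."""
    return (1 - relative_risk) * 100

# A relative risk of 0.753 for moderate drinkers vs. abstainers
# corresponds to roughly a 24.7% lower risk of coronary heart disease.
print(round(percent_risk_reduction(0.753), 1))   # 24.7

# The "up to 40% lower risk" end of the range corresponds to RR = 0.6.
print(round(percent_risk_reduction(0.6), 1))     # 40.0
```

Note that a relative risk compares group rates; it says nothing by itself about confounding factors, which is exactly the objection the critics below raise.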
Critics of moderate drinking claim that any benefits are far outweighed by the possible consequences of over consumption and that these benefits can be had by less risky means. Scientists also note that studies supporting the benefits of moderate drinking do not control for other factors, such as lifestyle differences between moderate drinkers and nondrinkers. There is evidence that moderate drinkers are more affluent than nondrinkers and as such lead lifestyles that are more conducive to good health in general.
Research on the effects of moderate drinking is in its early stages. No long-term studies have been done, and control groups would be difficult to establish because of the many variables. The current medical recommendation is that it is "not only meaningless but also irresponsible" to become a moderate drinker if you don't already drink, given that the many obvious health hazards of alcohol outweigh "the benefits of alcohol [which] are small and ill-understood," particularly when so many other cardiovascular treatments are available (p. 651).
Fetal alcohol exposure is regarded by researchers as the leading known cause of mental and physical birth defects, surpassing both spina bifida and Down syndrome, producing more severe abnormalities than heroin, cocaine, or marijuana, and is the most common preventable cause of birth defects in the United States.
It can cause mental retardation, facial deformities, stunted physical and emotional development, behavioral problems, memory deficiencies, attention deficits, impulsiveness, an inability to reason from cause to effect, a failure to comprehend the concept of time, and an inability to tell reality from fantasy. Secondary disabilities develop over time because of problems fitting into the environment.
Researchers believe that the risk is highest early in the pregnancy, but there are risks throughout because the fetus' brain develops throughout the entire pregnancy. No amount of alcohol, during any trimester, is absolutely safe.
Conventional wisdom holds that obesity is caused by over-indulgence in fatty or sugary foods, portrayed as either a failure of will power or a species of addiction. Various specialists strongly oppose this view. For example, Professor Thomas Sanders, the director of the Nutrition, Food & Health Research Centre at King's College London, emphasises the need for balance between activity and consumption.
Obesity is generally the result of a combination of factors.
Some eating disorders can lead to obesity, especially binge eating disorder (BED). As the name indicates, patients with this disorder are prone to overeat, often in binges. A proposed mechanism is that the eating serves to reduce anxiety, and some parallels with substance abuse can be drawn. An important additional factor is that BED patients often lack the ability to recognize hunger and satisfaction, something that is normally learned in childhood. Learning theory suggests that early childhood conceptions may lead to an association between food and a calm mental state.
While it is often quite obvious why a certain individual gets fat, it is far more difficult to understand why the average weight of certain societies has recently been growing. While genetic causes are central to understanding who is obese, they cannot explain why one culture grows fatter than another.
Although there is no definitive explanation for the recent epidemic of obesity, the evolutionary hypothesis comes closest to providing some understanding of this phenomenon. In times when food was scarce, the ability to take advantage of rare periods of abundance and use such abundance by storing energy efficiently was undoubtedly an evolutionary advantage. This is precisely the opposite of what is required in a sedentary society, where high-energy food is available in abundant quantities in the context of decreased exercise. Although many people may have a genetic propensity towards obesity, it is only with the reduction in physical activity and a move towards high-calorie diets of modern society that it has become so widespread.
The obesity epidemic is most notable in the United States. In the years from just after the Second World War until 1960, the average person's weight increased, but few were obese. In 1960 almost the entire population was well fed, but not overweight. In the two and a half decades since 1980, the growth in the rate of obesity has accelerated markedly and is increasingly becoming a public health concern. There are a number of theories as to the cause of this change since 1980; most believe it is a combination of various factors.
Some obesity co-factors challenge the theory that the epidemic is a new phenomenon. In particular, a class co-factor consistently appears across many studies. Comparing net worth with BMI scores, a 2004 study (Zagorsky 2004) found obese American subjects approximately half as wealthy as thin ones. When income differentials were factored out, the inequity persisted: thin subjects were inheriting more wealth than fat ones. Another study found that women who married into a higher-status family were thinner than women who married into a lower-status family.
On top of controversies about the causes of obesity, and about its precise health implications, come controversies about the correct policy approach to obesity. The main debate is between personal responsibility advocates, who resist regulatory attempts to intervene in citizens' private dietary habits, and public interest advocates, who promote regulations on the same public health grounds as the restrictions applied to tobacco products. There is some evidence that the public interest approach may help, though it is clearly not the solution to the problem. A recent study found that parents order items for their children from fast food menus totaling just over 100 fewer calories when each item's calories are clearly labeled. This suggests that raising awareness and educating consumers about calorie content may help reduce the number of calories consumed.
In the U.S., a recent episode in this controversy involves the so-called Cheeseburger Bill, an attempt to indemnify food industry businesses from frivolous lawsuits by obese clients. On July 16, 2004, the U.S. Department of Health and Human Services officially classified obesity as a disease. Speaking to a Senate committee, Tommy Thompson, the Secretary of Health and Human Services, stated that Medicare would cover obesity-related health problems. However, reimbursement would not be given if a treatment was not proven to be effective.
Folk medicine refers collectively to procedures traditionally used for treatment of illness and injury, aid to childbirth, and maintenance of wellness. It is a body of knowledge distinct from modern, scientific medicine but may coexist in the same culture. It is usually unwritten and transmitted orally until someone collects it. Within a given culture, elements of folk medicine may be diffusely believed by many adults or may be gathered and applied by those in a specific role of healer, shaman, midwife, witch, or dealer in herbs. Elements in a specific culture are not necessarily integrated into a coherent system and may be contradictory. Folk medicine is sometimes associated with quackery when practiced as theatrics or otherwise practiced fraudulently, yet it may also preserve important cultural tradition from the past.
Herbal medicine is an aspect of folk medicine that involves the use of gathered plant parts to make teas, poultices, or powders that purportedly effect cures. Many effective treatments adopted by physicians over the centuries were derived from plants (e.g. aspirin), and botany was an important part of the materia medica of professional medical training before the 20th century. Attention has been paid to the folk medicine of indigenous peoples of remote areas of the world in the interest of finding new pharmaceuticals. Of concern is the extinction of many species by the clearing of formerly wild rainforests. Such activity may lead to the loss of species of plants that could provide aids to modern medicine. Attitudes toward this type of knowledge gathering and plant preservation vary and political conflicts have increasingly arisen over ownership of the plants, land, and knowledge in several parts of the world.
While potentially important as cultural tradition, folk medicine should not be considered a reliable or proven form of healthcare; anyone considering trying folk medicine should always consult with a scientifically trained doctor first.
Alternative medicine describes methods and practices used in place of, or in addition to, conventional or scientific treatments. There is some debate as to what is included under the label "alternative," as these practices include things as far ranging as: spiritual, metaphysical, or religious treatments; Eastern and Asian traditional treatments; and a variety of unproven treatments, like acupuncture, chiropractic, healing-touch, and homeopathy. One popular definition of alternative medicine is provided by Richard Dawkins, an Oxford biologist: "that set of practices that cannot be tested, refuse to be tested or consistently fail tests."
The widespread acceptance of various forms of alternative medicine is a good illustration of a combination of social-psychological ideas, including: fads, scientific illiteracy, the power of marketing in a capitalistic economy, the influence of misinformed politicians on public policy (e.g., the National Center for Complementary and Alternative Medicine at the National Institutes of Health), and the willingness of people without "alternatives" to turn to unproven therapies when modern medicine informs them there is nothing else that can be done.
Criticisms of Alternative Medicine
Healthcare improves as a result of advancements in medical science. Modern medicine approaches health care from two angles. The first, the medical model, focuses on the eradication of illness through diagnosis and effective treatment. The second, the social model, focuses on changes that can be made in society and in people's own lifestyles to make the population healthier. Modern, scientific medicine has proven uniquely effective at treating and preventing disease. It is increasingly widespread and more widely accepted than other forms of medicine. Modern medicine is notably secular, indifferent to ideas of the supernatural or the spirit, and concentrates on the body and society to determine causes and cures. Modern, scientific medicine is the most effective contributor to the health of humans in the world today.
Some of the criticisms of alternative medicine include:
In addition to the various legitimate means of holding power, there are a variety of forms of government.
While there are many more types of government, based on what you've just read, do you think there is a type that is better than the others? If so, why do you think that?
A political party is a political organization that seeks to attain and maintain political power within government, usually by participating in electoral campaigns. Parties often espouse an expressed ideology or vision bolstered by a written platform with specific goals, forming a coalition among disparate interests.
In any political system where voting is allowed, some people are more likely to vote than others (see this Wikipedia article on Voter turnout for more information on this). Additionally, some people are more likely to have access to political power than are others. It is in teasing out the stratification of political participation and political power that the sociological imagination is particularly useful.
Politics and Gender
Politics and Age
Politics and Race
Politics and Class
Political power is a type of power held by a group in a society which allows that group to administer the distribution of public resources, including labor and wealth. Political power is not limited to heads of state; however, the extent to which a person or group (such as an insurgency, terrorist group, or multinational corporation) possesses such power relates to the amount of societal influence they can wield, formally or informally. Power, then, is often defined as the ability to influence the behavior of others with or without resistance.
In government, authority is often used interchangeably with the term "power". However, their meanings differ. Authority refers to a claim of legitimacy, the justification and right to exercise power. For example, while a mob has the power to punish a criminal (for example, by lynching), people who believe in the rule of law consider that only a court of law has the authority to order capital punishment.
Max Weber identified and distinguished three types of legitimate authority.
In most modern nation-states, the government has authority, which gives it power. Intriguingly, the fact that the government has authority gives it the right to use power to force citizens to do what the government deems appropriate. In other words, the government has the right, based on its authority, to force people to behave in certain ways. Refusal to follow the dictates of the government can result in the government using violence to coerce individuals into compliance.
At the same time, the fact that the government of a country has the right to use violence, theoretically a near-exclusive right (others can use violence only when officially sanctioned, such as when one purchases a hunting license or if one belongs to a government-sanctioned fighting league like the UFC), reinforces the government's claim to authority. Thus, you have something of a paradox: Do governments have authority if they do not have the right to use violence? And do governments derive their authority from their right to use violence? Another way to think about this quirk of politics is to ask yourself: Would you follow the law if there were no repercussions for your behavior? While you may for other reasons (e.g., a Hobbesian social contract), ultimately it is the threat of the legitimate use of violence that makes government authority compelling.
A monarchy is a form of government in which supreme power is absolutely or nominally lodged with an individual, who is the head of state, often for life or until abdication. The person who heads a monarchy is called a monarch. It was a common form of government in the world during ancient and medieval times. There is no clear definition of monarchy. Holding unlimited political power in the state is not the defining characteristic, as many constitutional monarchies such as the United Kingdom and Thailand are considered monarchies yet their monarchs have limited political power. Hereditary rule is often a common characteristic, but elective monarchies are also considered monarchies (e.g., the Pope) and some states with hereditary rulers are nevertheless considered republics (e.g., the Dutch Republic). Currently, 44 nations in the world have monarchs as heads of state, 16 of which are Commonwealth realms that recognise the monarch of the United Kingdom as their head of state.
Democracy is a form of government in which the right to govern or sovereignty is held by the majority of citizens within a country or a state. In political theory, democracy describes a small number of related forms of government and also a political philosophy. Even though there is no universally accepted definition of 'democracy', there are two principles that any definition of democracy includes. The first principle is that all members of the society (citizens) have equal access to power and the second that all members (citizens) enjoy universally recognized freedoms and liberties.
There are several varieties of democracy, some of which provide better representation and more freedoms for their citizens than others. However, if any democracy is not carefully legislated to avoid an uneven distribution of political power with balances, such as the separation of powers, then a branch of the system of rule could accumulate power and become harmful to the democracy itself. "Majority rule" is often described as a characteristic feature of democracy, but without responsible government it is possible for the rights of a minority to be abused by the "tyranny of the majority". An essential process in representative democracies is competitive elections that are fair both substantively and procedurally. Furthermore, freedom of political expression, freedom of speech, and freedom of the press are essential so that citizens are informed and able to vote in their personal interests.
Totalitarianism (or totalitarian rule) is a political system that strives to regulate nearly every aspect of public and private life. Totalitarian regimes or movements maintain themselves in political power by means of an official all-embracing ideology and propaganda disseminated through the state-controlled mass media, a single party that controls the state, personality cults, control over the economy, regulation and restriction of free discussion and criticism, the use of mass surveillance, and widespread use of state terrorism.
An oligarchy is a form of government in which power effectively rests with a small elite segment of society distinguished by royalty, wealth, family, military or religious hegemony. Such states are often controlled by politically powerful families whose children are heavily conditioned and mentored to be heirs of the power of the oligarchy. Oligarchies have been tyrannical throughout history, being completely reliant on public servitude to exist.
A Communist state is a state with a form of government characterized by single-party rule of a Communist party and a professed allegiance to an ideology of communism as the guiding principle of the state. Communist states may have several legal political parties, but the Communist party is usually granted a special or dominant role in government, often by statute or under the constitution. Consequently, the institutions of the state and of the Communist party become intimately entwined, such as in the development of parallel institutions.
While almost all claim lineage to Marxist thought, there are many varieties of Communist states, with indigenous adaptations. For Marxist-Leninists, the state and the Communist Party claim to act in accordance with the wishes of the industrial working class; for Maoists, the state and party claim to act in accordance with the wishes of the peasantry. Under Deng Xiaoping, the People's Republic of China proclaimed a policy of "socialism with Chinese characteristics." In most Communist states, governments assert that they represent the democratic dictatorship of the proletariat.
Theocracy is a form of government in which a god or deity is recognized as the state's supreme civil ruler, or in a broader sense, a form of government in which a state is governed by immediate divine guidance or by officials who are regarded as divinely guided. Theocratic governments enact theonomic laws. Theocracy should be distinguished from other secular forms of government that have a state religion, or are merely influenced by theological or moral concepts, and from monarchies held "By the Grace of God". Theocratic tendencies have been found in several religious traditions, including Judaism, Islam, Confucianism, and Hinduism, and within Christianity: Catholicism, Eastern Orthodoxy, Protestantism, and Mormonism. Historical examples of Christian theocracies are the Byzantine Empire (A.D. 330-1453) and the Carolingian Empire (A.D. 800-888).
The United States Constitution is silent on the subject of political organizations, mainly because most of the founding fathers disliked them. Yet, major and minor political parties and groups soon arose. In partisan elections, candidates are nominated by a political party or seek public office as an independent. Each state has significant discretion in deciding how candidates are nominated, and thus eligible to appear on the election ballot. Typically, major party candidates are formally chosen in a party primary or convention, whereas minor party representatives and Independents are required to complete a petitioning process.
The complete list of political parties in the United States is vast. However, there are two main parties in presidential contention:
Each of these two parties shares a degree of national attention by attaining the mathematical possibility of its nominee becoming President of the United States - i.e., having ballot access - for its presidential candidate in states whose collective total is at least half of the Electoral College votes.
American political parties are more loosely organized than those in other countries. The two major parties, in particular, have no formal organization at the national level that controls membership, activities, or policy positions, though some state affiliates do. Thus, for an American to say that he or she is a member of the Democratic or Republican party is quite different from a Briton's stating that he or she is a member of the Labour party. In the United States, one can often become a "member" of a party merely by stating that fact. In some U.S. states, a voter can register as a member of one or another party and/or vote in the primary election for one or another party, but such participation does not restrict one's choices in any way; nor does it give a person any particular rights or obligations with respect to the party, other than possibly allowing that person to vote in that party's primary elections (elections that determine who the candidate of the party will be). A person may choose to attend meetings of one local party committee one day and another party committee the next day. The sole factor that brings one "closer to the action" is the quantity and quality of participation in party activities and the ability to persuade others in attendance to give one responsibility.
Party identification becomes somewhat formalized when a person runs for partisan office. In most states, this means declaring oneself a candidate for the nomination of a particular party and intent to enter that party's primary election for an office. A party committee may choose to endorse one or another of those who are seeking the nomination, but in the end the choice is up to those who choose to vote in the primary, and it is often difficult to tell who is going to do the voting.
The result is that American political parties have weak central organizations and little central ideology, except by consensus. A party really cannot prevent a person who disagrees with the majority of positions of the party or actively works against the party's aims from claiming party membership, so long as the voters who choose to vote in the primary elections elect that person. Once in office, an elected official may change parties simply by declaring such intent.
At the federal level, each of the two major parties has a national committee (See, Democratic National Committee, Republican National Committee) that acts as the hub for much of the fund-raising and campaign activities, particularly in presidential campaigns. The exact composition of these committees is different for each party, but they are made up primarily of representatives from state parties, affiliated organizations, and other individuals important to the party. However, the national committees do not have the power to direct the activities of individual members of the party.
The map below shows the results of the 2008 Presidential Election in the United States, illustrating that the strength of the two major parties varies by geographic region in the U.S., with Republicans stronger in the South, Midwest, and some Mountain states, while Democrats are stronger along the coasts.
Sweden has a multi-party system, with numerous parties, in which no single party typically has a chance of gaining power alone, and parties must work with each other to form coalition governments. A multi-party system is a system in which three or more political parties have the capacity to gain control of government separately or in coalition.
Unlike a single-party system (or a non-partisan democracy), it encourages the general constituency to form multiple distinct, officially recognized groups, generally called political parties. Each party competes for votes from the enfranchised constituents (those allowed to vote). A multi-party system is essential for representative democracies, because it prevents the leadership of a single party from setting policy without challenge.
If the government includes an elected Congress or Parliament, the parties may share power according to proportional representation or the first-past-the-post system. In proportional representation, each party wins a number of seats proportional to the number of votes it receives. In first-past-the-post, the electorate is divided into a number of districts, each of which selects one person to fill one seat by a plurality of the vote. First-past-the-post is not conducive to a proliferation of parties and naturally gravitates toward a two-party system, in which only two parties have a real chance of electing their candidates to office. This gravitation is known as Duverger's law. Proportional representation, on the other hand, does not have this tendency and allows multiple major parties to arise.
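The contrast between the two rules can be made concrete with a short sketch. The D'Hondt method below is one common way of implementing proportional representation; the party names and vote counts are hypothetical, chosen only for illustration.

```python
def dhondt(votes, seats):
    """Allocate `seats` proportionally among parties using the D'Hondt method."""
    allocation = {party: 0 for party in votes}
    for _ in range(seats):
        # Each round, the seat goes to the party with the highest quotient:
        # votes / (seats already won + 1).
        winner = max(votes, key=lambda p: votes[p] / (allocation[p] + 1))
        allocation[winner] += 1
    return allocation

def first_past_the_post(district_votes):
    """Each district elects one seat, awarded to the plurality winner."""
    seats = {}
    for district in district_votes:
        winner = max(district, key=district.get)
        seats[winner] = seats.get(winner, 0) + 1
    return seats

# National vote totals for four hypothetical parties, 10 seats at stake.
votes = {"A": 43_000, "B": 31_000, "C": 16_000, "D": 10_000}
print(dhondt(votes, 10))  # → {'A': 5, 'B': 3, 'C': 1, 'D': 1}
```

Under proportional representation, the smaller parties C and D still win seats; under first-past-the-post, a party with 16% of the vote spread evenly across districts would typically win none, which is the mechanism behind Duverger's law.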
This difference is not without implications. A two-party system requires voters to align themselves in large blocs, sometimes so large that they cannot agree on any overarching principles. Along this line of thought, some theories argue that this allows centrists to gain control. On the other hand, if there are multiple major parties, each with less than a majority of the vote, the parties are forced to work together to form working governments. This also promotes a form of centrism.
The United States is an example of a country that technically has a multi-party system but in which only two parties have ever formed a government. Germany, India, France, and Israel are examples of nations that have used a multi-party system effectively in their democracies (though in each case there are two parties larger than all others, even though most of the time no party has a parliamentary majority by itself). In these nations, multiple political parties have often formed coalitions for the purpose of developing power blocs for governing.
The multi-party system of proportional representation has allowed a small third party, The Pirate Party, to come to prominence in Sweden, something that would be very unlikely in the United States. The Pirate Party strives to reform laws regarding copyright and patents. Its agenda also includes support for a strengthening of the right to privacy, both on the Internet and in everyday life, and the transparency of state administration. The Party has intentionally chosen to be bloc-independent on the traditional left-right scale so it can pursue its political agenda with all mainstream parties. The Pirate Party is the third largest party in Sweden in terms of membership. Its sudden popularity has given rise to parties with the same name and similar goals in Europe and worldwide.
While women in developed countries are generally as likely as men to vote (or even more likely; see figure below), women are underrepresented in political positions. Women make up a very small percentage of elected officials, both at local and national levels. In the U.S., for instance, in the 109th Congress (2005-2007) there were only 14 female Senators (out of 100) and 70 female Congressional Representatives (out of 435). This is illustrated in the graph below:
In 2010 things had improved slightly; 17.2% of the House and 17% of the Senate were women, though a substantial imbalance remained between the two political parties.
One of the factors that predicts how people vote is attitudes toward gender equality. U.S. counties with sex segregated occupations are 11% more likely to vote Republican than counties that have mixed-sex occupations. McVeigh and Sobolewski (2007) argue that the white males in sex segregated counties are more likely to vote for conservative candidates because they feel their occupational security is threatened by women and racial minorities.
Young people are much less likely to vote than are older people and are less likely to be politicians. This is illustrated for young people in the U.S. in the graph below (which also illustrates sex differences in voting rates):
The lower voting rates of young people in the U.S. help explain why programs like Medicare and Social Security are facing looming crises - the elderly will retain many of the benefits of these programs and are unwilling to allow them to be changed, even though young people will be the ones to suffer the consequences of these crises. Older people are also more organized, through organizations like the AARP, and are more likely to vote as a bloc on issues that affect them directly. As a result, older individuals in the U.S. have more power than younger people.
Generally, racial and ethnic minorities are less likely to vote in elections and are also underrepresented in political positions. The graph below illustrates the disparate voting rates between racial/ethnic groups in the U.S. in the 2008 Presidential Election:
Racial and ethnic minorities are also less likely to hold political positions. If blacks were represented in proportion to their numbers in the U.S., there should be 12 Senators and 52 Members of the House. In 2009 there was 1 black Senator (Roland Burris) and 39 black Members of the House. In 2010 the number in the House increased slightly to 41 (about 9.4%), while blacks remained just 1% of the Senate.
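The proportionality figures above can be checked with a quick calculation, assuming (as the text implies) that blacks make up roughly 12% of the U.S. population; census estimates put the share at about 12-13%.

```python
# Quick check of the proportional-representation arithmetic.
# The 12% population share is the figure implied by the text.
black_population_share = 0.12
senate_seats = 100   # total U.S. Senators
house_seats = 435    # total voting Members of the House

expected_senators = round(black_population_share * senate_seats)
expected_representatives = round(black_population_share * house_seats)

print(expected_senators)         # proportional expectation: 12 Senators
print(expected_representatives)  # proportional expectation: 52 Representatives
```

Comparing these proportional expectations (12 and 52) with the actual counts in the text (1 Senator, 41 Representatives) makes the scale of the underrepresentation explicit.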
Another way that political power is stratified is through income and education. Wealthier and more educated people are more likely to vote (see figures to the right). Additionally, wealthier and more educated people are more likely to hold political positions. A good illustration of this is the 2004 Presidential Election in the U.S. The candidates, John Kerry, and George W. Bush are both Yale University alumni. John Kerry is a lawyer and George W. Bush has an MBA from Harvard. Both are white, worth millions of dollars, and come from families that have been involved in politics.
The term collective behavior refers to social processes and events which do not reflect existing social structure (laws, conventions, and institutions), but which emerge in a "spontaneous" way. Collective behavior might also be defined as action which is neither conforming (in which actors follow prevailing norms) nor deviant (in which actors violate those norms). Collective behavior, a third form of action, takes place when norms are absent or unclear, or when they contradict each other. Scholars have devoted far less attention to collective behavior than they have to either conformity or deviance.
Examples of collective behavior include: religious revival meetings (like those depicted in the documentary Marjoe), a panic in a burning theater (e.g., the Kentucky Beverly Hills Supper Club fire), a sudden widespread interest in a website (e.g., MySpace) or clothing item (e.g., wriststrong bracelets), a collective social movement to improve the environment (e.g., Greenpeace), or the rapid spread of rumors (e.g., that Barack Obama is Muslim or not a US citizen). These diverse actions fall within the area sociologists call collective behavior.
Collective behavior differs from group behavior in three ways:
Traditionally, collective behavior in sociology includes four forms: the crowd, the public, the mass, and the social movement. While there is a degree of debate over what should be included under the label of "collective behavior" among sociologists today, often included are additional behaviors like: rumors, riots, trends, and fads.
Aside from the intrinsic interest of understanding why large groups of people behave the way they do, there are practical reasons why the study of collective behavior is important. Two examples might illustrate the practical importance:
"On December 3, 1979, eleven fans were killed by compressive asphyxia and several dozen others injured in the rush for seating at the opening of a sold-out concert by English rock band The Who. The concert was using "festival seating" (also known as "general seating"), where the best seats are available on a first-come, first-served basis. Due to the festival seating, many fans arrived early. When the crowds waiting outside heard the band performing a late sound check, they thought that the concert was beginning and tried to rush into the still-closed doors. Some at the front of the crowd were trampled as those pushing from behind were unaware that the doors were still closed. Only a few doors were in operation that night, and there are reports that management did not open more doors due to the concern of people sneaking past the ticket turnstiles."
Better architectural design and crowd management might have avoided this tragedy. How to redesign buildings and how to manage crowds are two kinds of knowledge that can result from the study of collective behavior. So is understanding how people behave in riots, what sets them off, and how they can be brought to a rapid end. Additionally, studying collective behavior can help us understand how humans react during natural disasters, helping to ensure that any damage that occurs is a result of the disaster itself and not of the human response to it.
Another motivation for studying collective behavior is to change elements of society. This is the component of collective behavior known as "social movements." Again, an example may help illustrate this point:
"On March 7, 1965, African American leaders led a march of 600 people to walk the 54 miles (87 km) from Selma to the state capital in Montgomery. Only six blocks into the march, however, state troopers and local law enforcement attacked the peaceful demonstrators with billy clubs, tear gas, rubber tubes wrapped in barbed wire and bull whips. They drove the marchers back into Selma. The national broadcast of the footage of lawmen attacking unresisting marchers seeking the right to vote provoked a national response. Eight days after the first march, Lyndon Johnson delivered a televised address to garner support for the voting rights bill he had sent to Congress. In it he stated:
But even if we pass this bill, the battle will not be over. What happened in Selma is part of a far larger movement which reaches into every section and state of America. It is the effort of American Negroes to secure for themselves the full blessings of American life. Their cause must be our cause too. Because it is not just Negroes, but really it is all of us, who must overcome the crippling legacy of bigotry and injustice. And we shall overcome.
Johnson signed the Voting Rights Act of 1965 on August 6. The 1965 act suspended poll taxes, literacy tests and other subjective voter tests. It authorized Federal supervision of voter registration in states and individual voting districts where such tests were being used. The act had an immediate and positive impact for African Americans. Within months of its passage, 250,000 new black voters had been registered. Within four years, voter registration in the South had more than doubled."
Various forms of collective behavior are examined in detail in the following sections.
A crowd is a gathering of people who share a purpose or intent and influence one another. Crowds are a common occurrence in modern life. Most sporting events, concerts, and other performances result in the gathering of crowds. Blumer (1951) differentiated four types of crowds:
When crowd behavior is directed toward a specific, violent end, the result is a mob. Mobs tend to be highly emotional. Examples of mob violence include the lynchings in the Southern U.S. during the 19th and 20th centuries. Violent crowd behavior without a specific goal is a riot. Because riots do not have a specific end, it is assumed that their intention is to express general dissatisfaction.
Collective behavior can also refer to behavior that is diffused or dispersed over large distances. Not all collective behavior has to occur in the immediate vicinity of others (compact crowds). This is especially true with the advent of mass media, which allows for the rapid distribution of information around the world.
Theories of Crowd Behavior
Crowds as "Gatherings"
Panic is a sudden terror which dominates thinking and often affects groups of people. Panics typically occur in disaster situations, such as during a fire, and may endanger the overall health of the affected group. Architects and city planners try to accommodate the symptoms of panic, such as herd behavior, during design and planning, often using simulations to determine the best way to lead people to a safe exit.
A moral panic is a mass movement based on the perception that some individual or group, frequently a minority group or a subculture, poses a menace to society. These panics are generally fueled by media coverage of social issues (although semi-spontaneous moral panics do occur) and often include a large element of mass hysteria. A moral panic is specifically framed in terms of morality and is usually expressed as outrage rather than unadulterated fear. Moral panics often, though not always, revolve around issues of sex and sexuality. A widely circulated and new-seeming urban legend is frequently involved. These panics can sometimes lead to mob violence. The term was coined by Stanley Cohen in 1972 to describe media coverage of the Mods and Rockers in the United Kingdom in the 1960s.
Recent moral panics in the UK have included the ongoing tabloid newspaper campaign against pedophiles, which led to the assault and persecution of a pediatrician by an angry, if semi-literate, mob in August 2000, and the panic surrounding the murder of James Bulger in Liverpool, England in 1993.
Originally proposed by Gustave LeBon (1896), contagion theory proposes that crowds exert a hypnotic influence on their members. The hypnotic influence, combined with the anonymity of belonging to a large group of people, results in irrational, emotionally charged behavior. Or, as the name implies, the frenzy of the crowd is somehow contagious, like a disease, and the contagion feeds upon itself, growing with time. This also implies that the behavior of a crowd is an emergent property of the people coming together and not a property of the people themselves.
There are several problems with LeBon's theory. First, contagion theory presents members of crowds as irrational. Much crowd behavior, however, is actually the result of rational fear (e.g., being trapped in a burning theater) or a rational sense of injustice (e.g., the Cincinnati race riots). Second, crowd behavior is often instigated by and guided by individuals. That the crowd seems to take on a life of its own is certainly true, but the influence of the individual should not be overlooked.
It is also worth noting that LeBon's book is from the perspective of a frightened aristocrat. He interprets the crowd episodes of the French Revolution as irrational reversions to animal emotion, which he sees as characteristic of crowds in general. Blumer sees crowds as emotional, but as capable of any emotion, not only the negative ones of anger and fear.
Convergence theory argues that the behavior of a crowd is not an emergent property of the crowd but is a result of like-minded individuals coming together. In other words, if a crowd becomes violent (a mob or riot), convergence theory would argue that this is not because the crowd encouraged violence but rather because people who wanted to become violent came together in the crowd.
The primary criticism of convergence theory is that there is a tendency for people to do things in a crowd that they would not do on their own. Crowds have an anonymizing effect on people, leading them to engage in sometimes outlandish behavior. Thus, while some crowds may result from like-minded individuals coming together to act collectively (e.g., political rally), some crowds actually spur individuals into behavior that they would otherwise not engage in.
Emergent-Norm Theory combines the above two theories, arguing that it is a combination of like-minded individuals, anonymity, and shared emotion that leads to crowd behavior. This theory takes a symbolic interactionist approach to understanding crowd behavior. It argues that people come together with specific expectations and norms, but in the interactions that follow the development of the crowd, new expectations and norms can emerge, allowing for behavior that normally would not take place.
More recent research into collective behavior has begun to change sociological thinking on crowds. This new approach distinguishes what brings people together as a gathering from what they do once gathered. Most gatherings are temporary and are formed by an assembling process. Individuals who gather are often acquainted and invited or informed about the gathering. Once gathered, the subgroups of individuals who already know one another tend to stay together throughout the gathering. There are, of course, some solitary individuals as well.
This line of research also dispenses with the idea that crowds impair judgment. Alcohol and drugs, which can contribute to deviant behavior, certainly can impair judgment and influence the actions of crowds, but crowds themselves do not impair judgment. The actions of individuals at gatherings also illustrate that individuals remain independent, sometimes responding to solicitations, sometimes ignoring them, sometimes interacting with their subgroup, and sometimes acting spontaneously.
Gatherings also exhibit dispersing processes that end the gatherings. Sometimes these are emergency dispersal, as when authorities arrive and try to end the gathering. Sometimes they are planned or the enthusiasm of the gathering wanes and people simply leave. Perhaps the key point of the "crowds as gatherings" approach is that there is a great deal of variation in gatherings.
A riot is a form of civil disorder characterized by disorganized groups lashing out in a sudden and intense rash of violence, vandalism or other crime. While individuals may attempt to lead or control a riot, riots are typically chaotic and exhibit herd-like behavior. Riots often occur in reaction to a perceived grievance or out of dissent. Historically, riots have occurred due to poor working or living conditions, government oppression, taxation or conscription, conflicts between races or religions, the outcome of a sporting event, or frustration with legal channels through which to air grievances. Riots typically involve vandalism and the destruction of private and public property. The specific property to be targeted varies depending on the cause of the riot and the inclinations of those involved. Targets can include shops, cars, restaurants, state-owned institutions, and religious buildings.
Hysteria is a diagnostic label applied to a state of mind, one of unmanageable fear or emotional excesses. People who are "hysterical" often lose self-control due to the overwhelming fear.
The term also occurs in the phrase mass hysteria, describing mass public near-panic reactions. It is commonly applied to waves of popular medical complaints that spread in response to news coverage, such as the "yuppie flu" of the late 1980s. A similar usage refers to any sort of public wave phenomenon and has been used to describe the periodic widespread reappearance of, and public interest in, UFO reports, crop circles, and similar examples.
Hysteria is often associated with movements like the Salem Witch Trials, the Red Scare, McCarthyism, and Satanic ritual abuse, where it is better understood through the related sociological term of moral panic.
Mass hysterias can also exhibit themselves in the sudden onset of psychogenic illnesses, or illnesses that are the result of psychology and not an external source (e.g., a pollutant or an infectious agent). A recent example of psychogenic illness resulting from mass hysteria occurred in Jilin, China in 2009, when hundreds of workers at an acrylic yarn factory began to fall ill. Doctors in China determined that, for most of those who fell ill, there were no physical indications of poisoning, which is what the workers claimed caused the illness.
A fad, also known as a craze, refers to a fashion that becomes popular in a culture (or subcultures) relatively quickly, remains popular, often for a rather brief period, then loses popularity dramatically.
A rumor is often defined as "an unverified account or explanation of events circulating from person to person and pertaining to an object, event, or issue in public concern" (p. 33), though the definition varies. Rumors generally involve some kind of statement whose veracity is not quickly, or ever, confirmed.
Rumors have three basic characteristics:
There are also various types of rumors, including:
As rumors travel they grow shorter, more concise, and more easily grasped. About 70% of details in a message are lost in the first 5 to 6 transmissions. Negative rumors are more likely to be disseminated than positive rumors.
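The transmission-loss figure above implies a rough per-retelling retention rate, which can be checked with a quick back-of-envelope calculation (the assumption of a uniform rate at every step is ours, purely for illustration):

```python
# Back-of-envelope check of the ~70% detail loss over 5-6 transmissions.
# Assumption (ours, for illustration): each retelling preserves the same
# fraction r of the remaining details.

transmissions = 6
fraction_remaining = 0.30          # ~70% of details lost

# Solve r ** transmissions == fraction_remaining for r:
r = fraction_remaining ** (1 / transmissions)

details_remaining = r ** transmissions
assert abs(details_remaining - fraction_remaining) < 1e-9

# Each retelling must preserve roughly four-fifths of the details
# for 70% of them to vanish within six transmissions.
assert 0.81 < r < 0.83
```

In other words, even a fairly faithful chain of retellings, each preserving about 82% of the details, erodes most of a message within a handful of steps.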
Rumors may be part of a collective explanation process. Bordia and DiFonzo found that 29.4% of statements on archived internet message boards are sensemaking statements that attempt to solve problems. The rest of the discussion revolves around these statements.
Rumors may also be political communication strategies. Media and particular cultural-historical conditions may facilitate a rumor's diffusion. In 2006, Jayson Harsin introduced the concept of the "rumor bomb" to describe the widespread phenomenon of rumoresque communication in contemporary relations between media and politics, especially within the complex convergence of multiple forms of media, from cell phones and the internet to radio, TV, and print. Harsin treats rumor as a particular rhetorical strategy. For Harsin, a "rumor bomb" extends the definition of rumor into a political communication concept with the following features:
Examples of rumors include:
There are several good online resources for checking rumors and urban legends, including:
Berk (1974) uses game theory to suggest that even a panic in a burning theater can reflect rational calculation: If members of the audience decide that it is more rational to run to the exits than to walk, the result may look like an animal-like stampede without in fact being irrational. In a series of empirical studies of assemblies of people, McPhail (1991) argues that crowds vary along a number of dimensions, and that traditional stereotypes of emotionality and unanimity often do not describe what happens in crowds.
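Berk's point, that a theater panic can be an aggregate of individually rational choices, has the structure of a social dilemma. A minimal sketch (the payoff numbers below are our own illustrative assumptions, not Berk's) might look like this:

```python
# Illustrative two-strategy escape game (hypothetical payoffs, not Berk's data).
# Payoff = expected probability of escaping in time.
# If most others walk, the exits stay clear and running still gets you out
# slightly faster. If most others run, the exits jam, and walking then leaves
# you behind, so running remains the individually rational choice even though
# all-walk beats all-run.

payoffs = {
    # (my_choice, majority_choice): probability of escaping
    ("walk", "walk"): 0.95,
    ("run",  "walk"): 0.97,
    ("walk", "run"):  0.40,
    ("run",  "run"):  0.60,
}

def best_response(majority):
    """Return the individually rational choice given what the crowd does."""
    return max(["walk", "run"], key=lambda me: payoffs[(me, majority)])

# Running is a dominant strategy...
assert best_response("walk") == "run"
assert best_response("run") == "run"
# ...yet universal running is worse for everyone than universal walking:
assert payoffs[("run", "run")] < payoffs[("walk", "walk")]
```

Running is the best response no matter what the crowd does, yet everyone running jams the exits and leaves all worse off than if all had walked: the "stampede" emerges from rational calculation, not animal irrationality.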
Social movements are broad social alliances of people who are connected through their shared interest in blocking or effecting social change. Social movements do not have to be formally organized. Multiple alliances may work separately for common causes and still be considered a social movement.
A distinction is drawn between social movements and social movement organizations (SMOs). A social movement organization is a formally organized component of a social movement. But an SMO may only make up a part of a particular social movement. For instance, PETA (People for the Ethical Treatment of Animals) advocates for vegan lifestyles along with its other aims. But PETA is not the only group to advocate for vegan diets and lifestyles; there are numerous other groups actively engaged toward this end (see vegan). Thus, the social movement may be a push toward veganism (an effort with numerous motivations) and PETA is an SMO working within the broader social movement.
Modern social movements became possible through the wider dissemination of literature and the increased mobility of labor that followed the industrialization of societies. Organized social structures like modern armies, political societies, and popular movements required freedom of expression, education, and relative economic independence.
Giddens has identified four areas in which social movements operate in modern societies:
Social movements can also spawn counter-movements. For instance, the women's movement of the 1960s and 1970s resulted in a number of counter-movements that attempted to block its goals, many of which were reform movements within conservative religions.
Aberle described four types of social movements based upon two characteristics: (1) who is the movement attempting to change and (2) how much change is being advocated. Social movements can be aimed at change on an individual level (e.g., AA) or change on a broader, group or even societal level (e.g., anti-globalization). Social movements can also advocate for minor changes (e.g., tougher restrictions on drunk driving; see MADD) or radical changes (e.g., prohibition).
Blumer, Mauss, and Tilly have described different stages social movements often pass through. Movements emerge for a variety of reasons (see the theories below), coalesce, and generally bureaucratize. At that point, they can take a number of paths, including: finding some form of movement success, failure, co-optation of leaders, repression by larger groups (e.g., government), or even the establishment of the movement within the mainstream.
Whether these paths will result in movement decline or not varies from movement to movement. In fact, one of the difficulties in studying social movements is that movement success is often ill-defined because movement goals can change. For instance, MoveOn.org, a website founded in the late 1990s, was originally developed to encourage national politicians to move past the Clinton impeachment proceedings. Since that time, the group has developed into a major player in national politics in the U.S. and has become a Political Action Committee (PAC). In this instance, the movement may or may not have attained its original goal - encouraging the censure of Clinton and moving on to more pressing issues - but the goals of the SMO have changed. This makes the actual stages the movement has passed through difficult to discern.
A variety of theories have attempted to explain how social movements develop. Some of the better-known approaches are outlined below.
Political Process Theory
Deprivation Theory argues that social movements have their foundations among people who feel deprived of some good(s) or resource(s). According to this approach, individuals who are lacking some good, service, or comfort are more likely to organize a social movement to improve (or defend) their conditions.
There are two significant problems with this theory. First, since most people feel deprived at one level or another almost all the time, the theory has a hard time explaining why social movements form among some deprived groups but not among others who are equally deprived. Second, the reasoning behind this theory is circular: often the only evidence for deprivation is the social movement itself. If deprivation is claimed to be the cause, but the only evidence for deprivation is the movement, the reasoning is circular.
Mass-Society Theory argues that social movements are made up of individuals in large societies who feel insignificant or socially detached. Social movements, according to this theory, provide a sense of empowerment and belonging that the movement members would otherwise not have.
Very little support has been found for this theory. Aho, in his study of Idaho Christian Patriotism, did not find that members of that movement were more likely to have been socially detached. In fact, the key to joining the movement was having a friend or associate who was a member of the movement.
Structural-Strain Theory proposes six factors that encourage social movement development:
This theory is also subject to circular reasoning: it incorporates deprivation theory, at least in part, and relies on deprivation and social/structural strain as the underlying motivation for social movement activism. However, as with deprivation theory, social movement activism is often the only indication that strain or deprivation existed.
Resource-Mobilization Theory emphasizes the importance of resources in social movement development and success. Resources are understood here to include: knowledge, money, media, labor, solidarity, legitimacy, and internal and external support from the power elite. The theory argues that social movements develop when individuals with grievances are able to mobilize sufficient resources to take action. The emphasis on resources offers an explanation for why some discontented or deprived individuals are able to organize while others are not.
Some of the assumptions of the theory include:
Critics of this theory argue that it places too much emphasis on resources, especially financial resources. Some movements are effective without an influx of money and depend instead on movement members' time and labor (e.g., the civil rights movement in the U.S.).
Political Process Theory is similar to resource mobilization in many regards, but tends to emphasize a different component of social structure that is important for social movement development: political opportunities. Political process theory argues that there are three vital components for movement formation: insurgent consciousness, organizational strength, and political opportunities.
Insurgent consciousness refers back to the ideas of deprivation and grievances. The idea is that certain members of society feel like they are being mistreated or that somehow the system is unjust. The insurgent consciousness is the collective sense of injustice that movement members (or potential movement members) feel and serves as the motivation for movement organization.
Organizational strength falls in line with resource-mobilization theory, arguing that in order for a social movement to organize it must have strong leadership and sufficient resources.
Political opportunity refers to the receptivity or vulnerability of the existing political system to challenge. This vulnerability can be the result of any of the following (or a combination thereof):
One of the advantages of the political process theory is that it addresses the issue of timing or emergence of social movements. Some groups may have the insurgent consciousness and resources to mobilize, but because political opportunities are closed, they will not have any success. The theory, then, argues that all three of these components are important.
Critics of the political process theory and resource-mobilization theory point out that neither theory discusses movement culture to any great degree. This has presented culture theorists an opportunity to expound on the importance of culture.
One advance on the political process theory is the political mediation model, which outlines the way in which the political context facing movement actors intersects with the strategic choices that movements make. An additional strength of this model is that it can look at the outcomes of social movements not only in terms of success or failure but also in terms of consequences (whether intentional or unintentional, positive or negative) and in terms of collective benefits.
Culture theory builds upon both the political process and resource-mobilization theories but extends them in two ways. First, it emphasizes the importance of movement culture. Second, it attempts to address the free-rider problem.
Both resource-mobilization theory and political process theory include a sense of injustice in their approaches. Culture theory brings this sense of injustice to the forefront of movement creation by arguing that, in order for social movements to successfully mobilize individuals, they must develop an injustice frame. An injustice frame is a collection of ideas and symbols that illustrate both how significant the problem is as well as what the movement can do to alleviate it.
A few things we know about injustice frames:
In emphasizing the injustice frame, culture theory also addresses the free-rider problem. The free-rider problem refers to the idea that people will not be motivated to participate in a social movement that will use up their personal resources (e.g., time, money, etc.) if they can still receive the benefits without participating. In other words, if person X knows that movement Y is working to improve environmental conditions in his neighborhood, he is presented with a choice: join or not join the movement. If he believes the movement will succeed without him, he can avoid participation in the movement, save his resources, and still reap the benefits - this is free-riding. A significant problem for social movement theory has been to explain why people join movements if they believe the movement can/will succeed without their contribution. Culture theory argues that, in conjunction with social networks being an important contact tool, the injustice frame will provide the motivation for people to contribute to the movement.
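Person X's choice can be sketched as a simple expected-value calculation (all numbers below are hypothetical, chosen only to illustrate the free-rider logic):

```python
# Hypothetical free-rider calculation (illustrative numbers only).
benefit = 100                  # value of a cleaner neighborhood to person X
cost = 20                      # personal cost of participating (time, money)
p_success_without_me = 0.90    # X believes the movement likely succeeds anyway
p_success_with_me = 0.905      # one more member barely changes the odds

# Expected value of each choice, on narrow self-interest alone:
expected_if_join = p_success_with_me * benefit - cost        # 90.5 - 20 = 70.5
expected_if_free_ride = p_success_without_me * benefit       # 90.0

# Free-riding dominates whenever one contribution barely moves the odds:
assert expected_if_free_ride > expected_if_join
```

Whenever one person's contribution barely moves the odds of success, free-riding dominates on narrow self-interest alone; culture theory's answer is that the injustice frame attaches value to participation itself, alongside the recruiting role of social networks.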
Framing processes include three separate components:
Despite the many tests that suggest otherwise, there is no correlation between personality and career choice; thus, there is no particular "personality type" among sociology majors.
The average graduating sociology major is a young female (early 20s) who is white and single. Almost 80% of students graduating with sociology degrees are female, and nearly that many (75%) are white. Less than half of the parents of sociology majors have baccalaureate degrees of their own.
Many sociology majors choose the discipline because they found their first exposure to it engaging, usually through an introductory course whose concepts they found interesting. Only 7% of sociology majors choose the discipline because it appears easy, and relatively few (5%) choose it because the major they wanted was unavailable. In other words, sociology majors find the discipline exciting and challenging and generally enjoy studying sociology.
By the time they graduate, 90% of sociology majors report understanding the basic concepts of the discipline (e.g., social structure, social stratification, etc.). Most also grasp the differences in the theoretical paradigms of sociology (e.g., structural functionalism, conflict theory, etc.). Sociology majors feel confident in the following skills as a result of their degree: identifying ethical issues in research, developing evidence-based arguments, evaluating different research methods, writing reports that are understandable by non-sociologists, forming causal hypotheses, using computers to develop reference lists, interpreting the results of data analysis, and using statistical software (though slightly less than 50% feel confident in their ability to use statistical software).
Sociology majors are generally satisfied overall with their experience earning a sociology degree. Most are also satisfied with the quality of teaching, their access to technology and faculty, their interaction with other majors, and getting the courses they need to graduate. Unfortunately, most students (between 80% and 92%) are dissatisfied with the information provided on graduate school and career opportunities (hopefully this brief chapter will help fill this void).
What can you do with a Baccalaureate degree in Sociology? While it may not seem like it when you are deep into studying race relations or gender stratification, the training you receive as a sociology major is actually quite applicable outside academia as well. College graduates trained in sociology bring unique and valuable insights into business decisions concerning issues surrounding race, ethnicity, gender, age, education, and social class. To give a specific example, a sociology major could help a company like CVS realize the problem with tagging hair care items for African Americans with anti-theft measures while not doing the same for hair care items for whites. Practices like these may seem pragmatic to the managers of these stores, but they also alienate consumers.
College graduates trained in sociology bring many skills to their jobs, including: the ability to recognize trends and patterns, the ability to create concise reports and essays, strong critical thinking skills, oral presentation skills, strong interpersonal communication skills, skills in data collection and analysis using statistical software, grant writing skills, management skills, and planning and organizational skills. All of these skills are potentially invaluable contributions to workplaces.
Most graduating seniors (around 72%) plan on getting a job right out of college. Around 40% plan on going on to graduate school (these paths aren't mutually exclusive, of course; you can work while attending graduate school). There are a number of institutions where sociologists find employment, including: schools, churches, hospitals, corporations, government, and social service agencies. For those planning on getting jobs, all of the following are career paths sociology majors are well-suited for: organizational planning, development, and training; human resource management; industrial relations; marketing; public relations; organizational research; and international business.
The Corporate World
Continuing Your Education
Field of Study | %
Applied Sociology, MA and related areas | 11.6
Masters in Social Work | 11.3
Doctorate in Sociology | 5.7
Public Affairs/Public Policy | 3.3
What can you do with a PhD in Sociology? A PhD in Sociology prepares an individual for a variety of career options, including all of those mentioned above for someone with a Bachelor's Degree in Sociology (e.g., marketing, non-governmental organizations, etc.), but it also means you will be much better qualified for these types of jobs. Graduate training in Sociology includes a greater emphasis on research methodology and statistics. It also often includes training in qualitative research methods, like content analysis. Additionally, graduate training includes much more in-depth analysis and discussion of many of the topics covered in your undergraduate training. The format of graduate training may also differ, as many of the classes are designed as seminars rather than lecture-oriented classes: students come to class meetings having read the assigned readings and then discuss them under the supervision of a professor.
All of this additional training will prepare a PhD in Sociology for advanced careers in fields ranging from corporate marketing and statistical forecasting to community organizing and lobbying. A PhD emphasizing quantitative research and demography could lead to a career at the Census Bureau or the World Bank.
Another obvious occupation one can pursue is as an academic sociologist, working at a college or university. Depending on the type of university or college where one works, the job description will vary substantially. Some colleges - typically four-year liberal arts colleges - focus on teaching. In contrast, research universities - which include PhD programs - focus on research. Thus, your job description as an academic sociologist could range from a heavy emphasis on teaching undergraduates to a heavy emphasis on teaching graduate students. In both types of institutions, there are both tenure and non-tenure track jobs. Tenure refers to a lifetime appointment and carries with it a contractual right not to have the position terminated without just cause. Adjunct instruction, usually teaching on a contract basis, is another option.
According to Forbes magazine, sociologist is one of the top ten highest-paying rare jobs in the United States, and it is generally rated as one of the most satisfying occupations. This may be because sociologists are often engaged in trying to remedy social ills, which can be a very satisfying life pursuit.
According to data compiled in ASA’s 2007 “Beyond the Ivory Tower” report, one-fourth of PhD sociologists work outside academia. Of those in academia, as of April 2008, the average sociology faculty salary was $68,857, ranging from assistant professors making $53,844 to full professor earnings of $87,938.
Private Industry - For-Profit | 6.5
Private Industry - Not-For-Profit | 8.3
Self-Employed and Other | 3.4
Some sociologists find the adaptation of their sociological training and insights to the business world relatively easy. Corporations want and need to understand their customers' habits and preferences in order to anticipate changes in their markets. This drive to understand consumers is called consumer research and is a growing interest of corporations. Sociology majors are particularly well-suited for this type of research as the ultimate goal of empirical sociological research is to predict human behavior. This particular niche may be the single largest opportunity for sociologists in the corporate world.
Another budding area in modern retail firms is site selection, or the determination of the best locations for new stores. Site selection requires understanding human ecology and consumer spending patterns, both of which are addressed using the sociological imagination. Some additional direct applications of sociology include concept and product testing (which will put to good use training in research methods), the evaluation of global market opportunities (which draws upon understandings of various cultures), long-range planning and forecasting (which draws on both statistics and futurist perspectives), marketing and advertising (which applies consumer studies directly), and human resource management (which relies on studies of organizational behavior).
One way to situate yourself well for the corporate world after earning your Bachelor's degree in Sociology would be to double major in Sociology and Business or to minor in Business. This would give you the credentials that hiring departments in the business world are looking for while still allowing you to focus on Sociology. Another key to succeeding in the corporate world with a degree in Sociology is to market your specific skill set. As noted above, a degree in Sociology provides you with skills that many other college graduates lack. Emphasizing those skills on your resume and in job interviews increases the odds of being hired and of using those skills in your job. There is also evidence to suggest that sociology graduates who use their sociological training in their post-graduation jobs are more satisfied with those jobs.
Non-Governmental Organizations (or NGOs) are legally constituted organizations created by private persons or organizations with no participation or representation of any government. Examples of NGOs include Oxfam, Catholic Relief Services, CARE International, and Lutheran World Relief. Many NGOs are concerned with the very social problems and social issues sociologists study, from poverty to gender stratification to world population growth. NGOs are ideal opportunities for sociology majors to apply what they have learned in college.
For those planning on continuing their education, the table to the right breaks down the most frequently chosen fields of study for sociology majors:
Many professional degree programs (e.g., medical school and law school) do not, in fact, require a specific undergraduate degree. Sociology majors can carry the sociological imagination into medical practice, offering sociological insights while practicing medicine. In 2001, sociology majors had a 42% acceptance rate into medical school.
Sociology majors should view their undergraduate education as opening doors into many possible fields of work and study. Two factors seem to limit the occupational prospects of sociologists. The first is their own imagination: the sociological imagination is applicable to almost every occupation and field of research, from studying how physicists do their work to union organizing. The second is that many people do not know what sociologists are trained to do. That should only be a limiting factor until they meet a sociology major!