
Mahzarin Banaji, Anthony Greenwald

Blindspot: Hidden Biases of Good People

Nonfiction | Book | Adult | Published in 2013

Chapter Summaries & Analyses: Preface-Chapter 4

Preface Summary

The preface begins with the authors explaining that every vertebrate has a blind spot in its retina called a scotoma. The area contains no light-sensitive cells and therefore has no means of transmitting light to the visual areas of the brain. “Paradoxically” (xi), the authors add, you are capable of detecting your own blind spot visually. They provide a grid with two black dots and a plus sign between them so you can see your blind spot in action. When you cover one eye and hold the book around 6 inches from your face, the dot on the side of your open eye disappears. Your brain fills the empty area with a continuation of the grid, or “something that made reasonable sense” (xi). An even more extreme condition, blindsight, occurs when the visual cortex is damaged. Patients with this condition can’t see an object directly in front of them but can still reach for it because certain pathways that dictate visual behavior remain intact.

The authors directly compare these visual phenomena to “another type of blindspot” (xii) on which the book focuses: the hidden biases of which we are unaware—like the retinal scotoma in our eyes. These biases are like blindsight in that they can still guide our behavior, even if we’re not aware of the role they play in our lives. The authors define hidden biases as “bits of knowledge about social groups” (xii) that we experience in our everyday lives. They can affect how we treat and act toward others without our knowledge of their influence. The book’s aim is to show that scientists, including the authors, fully recognize the existence of hidden-bias blindspots due to the “sheer weight of scientific evidence” (xiii). However, the authors contend it may be harder to convince readers, who may be unaware that such biases exist within them.

The authors then explain that they met in Columbus, Ohio, in 1980. Banaji was a Ph.D. student from India who had come to work with Greenwald at Ohio State. Their branch of psychology underwent revolutionary changes in the 1980s as methods became available to detect “mental content and processes that were inaccessible to introspection” (xiv). The authors wanted to learn whether those methods could reveal previously undetectable influences on behavior. Understanding the unconscious mind is still a heavily researched topic that continues to evolve. Twenty-five years ago, psychologists thought behavior was guided by the conscious mind, but most now agree that behavior “is produced with little conscious thought” (xiv). The idea of “unconscious cognition” (xiv)—a term replaced in the 1990s by “implicit cognition”—is now predominant. Current research methods often don’t depend on participants’ reports of their conscious thoughts or behavior.

The book relies heavily on scholarship from the past 80 years but focuses in particular on the work of Gunnar Myrdal, who led the effort that produced the 1944 work An American Dilemma and helped make racial discrimination a national priority, and Gordon Allport, who paved the way for studying prejudice scientifically in his 1954 book The Nature of Prejudice. The authors explain that they try to stick as closely as they can to the evidence and acknowledge that future research may eventually complicate their conclusions. However, they feel that if they have done their job well, their argument—that even people with good intentions may have hidden-bias blindspots—will hold up for a long time.

Chapter 1 Summary: “Mindbugs”

The chapter begins by discussing a lecture in which a professor shows an illustration of two tables—seemingly different sizes—and reveals that they are in fact the same size. A sketch of a parallelogram that fits over both tabletops quickly quells the students’ skepticism. The authors explain that a visual illusion like this one, which creates errors in the mind’s perception, exposes a “mindbug” (4): one of the “ingrained habits” (4) that cause mistakes in reasoning, memory, and behavior. The table illusion, called Turning the Tables, was created by psychologist Roger Shepard. Although the eye sees the tables accurately, the brain’s visual cortex makes the error as it tries to convert a 2-D image into a 3-D one, imposing a “third dimension of depth” (5). The mind’s ability to perform this conversion marks a “triumph” (5) of human adaptation. Even when we learn the tables are the same size, we still see them differently, as our sophisticated visual system continues to merge our 2-D retinal image with our 3-D world.

The illusion demonstrates that much of the mind’s work is done “automatically, unconsciously, and unintentionally” (6). While the idea of the unconscious was popularized by Sigmund Freud, our modern-day understanding of it bears little resemblance to Freud’s theories—in which the unconscious mind dictated nearly every aspect of our lives. Instead, current knowledge of the unconscious was shaped by Hermann von Helmholtz, a 19th-century German physicist and physiologist. Helmholtz coined the term “unconscious inference” (6) to describe how the mind unconsciously constructs perceptions from the physical surroundings we see. In the table illusion, an unconscious act causes the mind to replace the 2-D retinal image with a “consciously perceived 3-D shape” (6). The authors provide another example of unconscious inference in the form of a picture in which two squares on a checkerboard appear to be two different shades but are in fact the same color. They explain that a mindbug once again causes our perception to fail. They then ask how the body’s systems that serve us so well in one context—like our visual system—can fail us in others.

The authors turn to another type of mindbug, one that creates false memories. They discuss the concept of a “false alarm” (9), in which we remember something that never happened. In a word-recall test based on two lists, students proved more likely to “remember” certain words that weren’t on the lists than words that were. In the context of something like a crime, a false alarm could have dire consequences. Elizabeth Loftus, a psychologist at the University of California, Irvine, found that participants who viewed a simulated accident recalled events differently depending on the questions they were asked. The influence of after-the-fact information on memory is a type of mindbug called “retroactive interference” (10), or the “misinformation effect” (10)—a term coined by Loftus. The Innocence Project, which uses DNA testing to help release prisoners who are wrongfully convicted, found that nearly 75% of those later deemed innocent were incarcerated based on faulty eyewitness testimony.

The authors discuss a second type of mindbug, the “availability heuristic” (11): assuming something happens more often because it comes to mind more easily. If someone is asked which happens more frequently, murder or suicide, they will most likely answer murder even though the correct answer is suicide. Murder receives more cultural attention and is more extreme, so it comes to mind more easily. The availability heuristic is closely related to another mindbug: “anchoring” (12). The mind uses a piece of information as a reference point, or “anchor” (12), and works forward from that point. In one experiment, students asked to write down the last two digits of their Social Security number went on to use those digits as an anchor for determining the price they were willing to pay for unrelated objects. Higher Social Security numbers correlated with higher prices for each object. Like other mindbugs, anchoring can have significant consequences when applied in other contexts, like stock valuation.

Mindbugs can also exist in social situations. For example, studies show we can make significant judgments about people’s personalities just by looking at pictures of them, sometimes within a few seconds and with only trivial bits of information—or sometimes none at all. When making important decisions about individuals, we often look to the social group to which they belong to help guide us. We don’t always view people as individuals but as part of a group we’ve made judgments about. This group is not always based on race or ethnicity but can involve categories such as age, gender, religion, and sexuality, to name a few. The social group in which we place someone serves as the “contextual cue that generates an unconscious social inference” (16). These social mindbugs can cause us to trust those we shouldn’t and vice versa. They can also negatively affect the actions we take toward ourselves.

While the authors concede that mindbugs are deeply rooted in our evolutionary past, when our ancestors lived in small, homogeneous groups and regularly faced danger, they also argue that times have dramatically changed since then. We are no longer subject to the demands once placed on our ancestors, and they would not recognize or understand our modern society. It may once have made sense to stay away from those who were different, but now doing so can be a costly mistake. While we have changed our perception of what is fair and just to a noticeable degree, particularly in recent history, continuing to do so requires recognizing the role mindbugs play in influencing our behavior, as well as the wedge they create between our thoughts and actions.

Chapter 2 Summary: “Shades of Truth (Problems with Asking Questions)”

The second chapter begins by exploring the idea of “untruths” (21), or dishonest statements people are not completely aware they are telling. They are not ill-intentioned and fall somewhere between “totally unconscious and partly conscious” (21). The authors hope to expose their causes and help us realize that we tell these untruths more frequently than we probably believe. They provide the example of the question “How are you?” to which you most likely reply that you are fine, regardless of how you actually feel. You may be able to justify your untruth to such a question, but that’s beside the point: The simple fact remains that you are telling an untruth.

The authors explain that these untruths are called “white lies” (22), but there are other types of untruths, which the authors designate with other colors. They label as “gray lies” (23) a type of untruth that is “a bit darker” (23). Gray lies are told to “spare one’s own feelings” (23), such as when a person tells someone who is homeless that they have no cash to spare when they actually have money on them. “Colorless lies” (24) are lies people keep from themselves; they have no color because they are “invisible” (24) to the person telling them. People with addiction problems may lie about how much they drink or smoke even though they feel they are telling the truth, because their perception of how much they consume differs from reality. Another term for colorless lies is “self-deception” (25). “Red lies” (25) are told for survival or reproductive benefits. There may be an evolutionary basis for such lies, and other species use deception in the form of mimicry or camouflage for survival and mating purposes. However, human language is a relatively recent development, and it is not conclusive that lying in order to live longer or reproduce is an evolutionary adaptation in humans.

The final type, “blue lies” (26), occurs when we believe our response “to be more essentially truthful than the actual truth” (26). An example might be a student who gets a low grade on a test and tells the teacher they did the reading even though they didn’t, because they generally do the reading and circumstances prevented them only on this occasion. The response gets at an essential truth, but in the immediate circumstances it’s a lie. The goal of such lies is to “produce favorable regard by others” (27). Social psychologists refer to blue lies as “impression management” (27), and they can cause problems for survey research because participants often give responses they think the researcher wants to hear. Although strategies have been devised to try to “weed out” (28) such participants, impression management can make it particularly difficult for researchers to measure racial prejudice. Studies have shown participants skewing their responses depending on the racial background of the questioner. The authors conclude by revisiting the idea of answering questions about our racial biases and asking whether our answers might now be tempered by our new understanding of the unconscious influences explored in the chapter.

Chapter 3 Summary: “Into the Blindspot”

The first psychological studies on race began in the 1920s and 1930s. Research consisted mainly of interviewing participants, which was not necessarily problematic then because Americans at the time were much more open about their racial attitudes. Over the course of the 20th century, research methods were revised and no longer relied on simply asking questions. The chapter focuses on a new method developed by author Anthony G. Greenwald in 1994. The authors provide two hands-on tests so the reader can see how the method works. The first requires timing oneself sorting a deck of cards, first with hearts and diamonds on the left and spades and clubs on the right, then with diamonds and spades on the left and clubs and hearts on the right. For most people, the second sorting task, which mixes the colors of the suits, takes 50% longer than the first, which is organized by color.

The next test asks the reader to time themselves sorting words on four lists into two given categories. The two lists on Test A are headed by the categories “INSECTS or pleasant words” and “FLOWERS or unpleasant words” (36). The two lists on Test B are headed by the categories “FLOWERS or pleasant words” and “INSECTS or unpleasant words” (37). Many people find Test B easier, meaning they sort it faster with fewer errors, which “reveals an automatic preference for flowers relative to insects” (38). Those who find Test A easier hold the opposite preference: that insects are preferable to flowers. Previous participants in the study, all of whom held doctorate degrees, took an average of 16 seconds longer on Test A.

These tests are known as Implicit Association Tests (IATs). They depend on the idea that your brain has stored past knowledge and experiences that you bring to the sorting tasks. The shared property, such as “goodness or badness” (39), that potentially links categories, like those seen on IATs, is called “valence” (39) by psychologists. As the authors summarize, “Positive valence attracts and negative valence repels” (39). The “mental glue” (39) that bonds categories in the mind is called “mental association” (39). In 1994, Greenwald administered the first IAT through a computer program he wrote. After trying it on several people, including himself, he concluded that it might be an effective way of measuring what psychologists call “attitude” (41). Attitudes in psychology are “associations that link things (flowers or insects in this case) to positive and negative valence” (41).

A few months after Greenwald’s 1994 test came a second IAT, which was also the first to address race. The new IAT replaced insects and flowers with famous African Americans and European Americans in the hope that it would bypass impression management and reveal attitudes toward racial groups. Most importantly, it might reveal hidden biases that a researcher would be unable to detect by asking questions. The authors provide the Race IAT in the book and invite the reader to try it. Greenwald discusses the surprise he felt the first time he took it, when he realized he was faster at pairing white people with pleasant words. While he was excited by the test’s implications for research on racial bias, he was also “personally distressed” (45).

The question then arises whether having “an automatic preference for White relative to Black” (46), or vice versa, means someone is prejudiced. Others who took the test asked the same question, and the authors initially sidestepped it by responding that the test only measured “implicit prejudice” (46). They had to be cautious because the test did not measure hostility or hatred, nor did a preference necessarily equate to prejudice or “discriminatory behavior” (47). However, research since then has shed new light on the subject. The authors now know that 75% of those who take the Race IAT show an “automatic White preference” (47), and subsequent research has shown that such a preference is, in fact, an indication of discriminatory behavior.

A 2001 research study gave test subjects the Race IAT, then analyzed two subsequent videotaped interviews, the first with a Black interviewer and the second with a white one. The tapes were coded for nonverbal cues that indicated either comfort and friendliness with the interviewer or discomfort and coolness. The study showed that those who had a white preference on the IAT were less friendly with the Black interviewer. Such a finding alone was not enough to conclude someone was prejudiced, but other studies soon followed. By 2007, there were 32 studies that administered the Race IAT alongside another measure of discriminatory behavior. Collectively, they showed that white preference on the Race IAT “predicted racially discriminatory behavior” (49). The overall conclusion from the meta-analysis of these studies was that the Race IAT “correlated moderately” (50) with discriminatory behavior. Correlation is a numerical value that ranges from 0, meaning no relationship, to 1, meaning one measure exactly predicts the other; by convention, values of .5 or greater count as large. The correlation between the Race IAT and discriminatory behavior was .24, putting it in the medium range. This moderate correlation indicates that the Race IAT is a better predictor of discriminatory behavior than other research methods, particularly asking questions. Although the research has not yet included examining or measuring acts of overt or aggressive racism, the Race IAT clearly reveals a mindbug—in this case hidden race bias—that can moderately predict discriminatory behavior.
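
To make the cited numbers concrete, here is a minimal sketch of how a correlation coefficient like the .24 reported above is computed. It assumes Python with NumPy, and the paired scores are invented purely for illustration; they are not data from the book or from the meta-analysis it cites.

```python
import numpy as np

# Invented scores, purely for illustration: each index pairs one
# participant's Race IAT score with a coded measure of discriminatory
# behavior from a companion task.
iat_scores = np.array([0.12, 0.45, 0.30, 0.80, 0.05, 0.62, 0.25, 0.55, 0.40, 0.70])
behavior = np.array([0.20, 0.10, 0.40, 0.55, 0.15, 0.30, 0.05, 0.60, 0.25, 0.35])

# Pearson's r ranges from -1 to 1: 0 means no linear relationship, and 1
# means one measure exactly predicts the other. The meta-analysis cited by
# the authors reported r = .24 for the Race IAT and discriminatory behavior.
r = np.corrcoef(iat_scores, behavior)[0, 1]
print(f"correlation r = {r:+.2f}")
```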

Chapter 4 Summary: “Not That There’s Anything Wrong with That!”

The chapter takes its title from a line in an episode of the sitcom Seinfeld in which the main character, Jerry, and his friend George pretend to be gay in front of an eavesdropping woman. The woman turns out to be a reporter, who threatens to out Jerry. The two men then try to convince the reporter they aren’t gay, adding, “Not that there’s anything wrong with that!” (53). The authors conclude the opening paragraphs by explaining that the episode captures how many people currently feel about homosexuality: There’s nothing wrong with it, but the need to deny it suggests a lingering belief that there might be.

The scene also demonstrates what psychologists consider to be the “two systems that characterize the mind: reflective and automatic” (54). Jerry’s reflective, or conscious, side likes gay people and doesn’t feel there’s anything wrong with homosexuality. However, his automatic side is tied to a society that historically viewed homosexuality in a negative light, and as a result he may feel uneasy about being seen as gay. The line “Not that there’s anything wrong with that!” (54) captures the tension between the two sides of Jerry’s mind. The reflective side of the mind is the conscious side we are familiar with; our explicit preferences, such as religious or political beliefs, reside there. The automatic side is “a stranger to us” (55) and sometimes drives us to actions we don’t understand. For example, a man goes to buy a car that’s sensible for his growing family and leaves with a sports car. Such actions sometimes subvert rational choices and are driven by “impulse and intuition” (56).

The IAT is a way to expose the difference between the reflective and automatic sides of the mind. It reveals that people may even hold biases against the groups to which they belong. The word psychologists use for this gap is “dissociation” (57), defined as “the occurrence, in one and the same mind, of mutually inconsistent ideas that remain isolated from one another” (58). The discomfort produced when such mutually inconsistent ideas come into contact is known as “dissonance” (58). In the mid-1950s, psychologist Leon Festinger coined the term “cognitive dissonance” (59), which suggests that an awareness of the mind’s conflicts “violates the natural human striving for mental harmony” (59). Taking the IAT and getting an unexpected result can lead to feelings of dissonance. People want to feel as if they know their own minds and may have difficulty accepting that forces beyond their awareness influence their thoughts. However, many people would rather know about their mental blindspots so they can potentially take action to correct them.

Many leading psychologists agree that the mind mostly works unconsciously. Our automatic thoughts help us navigate the world and steer us away from situations where a divide between the automatic and reflective mind would force us to feel dissonance. People with amnesia offer a good sense of how the automatic side of the brain works because their memory is compromised. In one study, participants with amnesia looked at photographs of two men and were told that one was good and one was bad, along with many other details about them. Later, when asked to recount information about the men, none could remember anything other than who was good and who was bad. This information “turned into an impression” (64) that stayed with the patients even though they forgot everything else. The “good/bad judgment” (64) was automatic rather than reflective. A less dramatic example of the same type of dissociation occurs when you can’t remember the details of a movie you saw but know whether you loved or hated it.

The authors then explain that laughter can reveal automatic thoughts, or our hidden biases. Sometimes we have a conflicting response to a joke we find offensive—laughing first, then frowning. A study found that the Race IAT was a good indicator of how much someone laughed at jokes involving Black stereotypes. Those who had a stronger white preference laughed more at such jokes. The same was true of a gender IAT and sexist humor.

The authors also explain that an age IAT shows many people hold negative biases toward the elderly. This is surprising, as the elderly are often viewed positively and there is little overt anti-elderly sentiment. However, the test found that 80% of Americans more strongly associate young with good than old with good; they have a “strong automatic preference for the young over the old” (67). Ageism, according to the authors, is “one of the strongest implicit biases we’ve detected across dozens of studies over fifteen years” (67). The age IAT indicates that even the elderly themselves have a preference for the young. The authors posit that the number of negative stereotypes of the elderly in the culture—including in movies, television, and advertising—may be part of the reason for ageism. They explain that our culture clearly influences the way we think, whether we want it to or not. Another study demonstrated how the elderly might reconcile their own preference for the young: It found that the older participants actually self-identified as young. They avoided dissonance by simply not labeling themselves as old.

The authors conclude the chapter by reiterating that dissonance can cause sadness because it complicates images we might have of ourselves as “fair-minded and egalitarian” (69). However, the aim of the Race IAT is to bring self-awareness in the hopes that such knowledge can bring about change. The reflective side of the brain is fully capable of overruling the automatic side. The authors argue that any depression one feels about one’s own biases is useful, as those biases can now be acknowledged and overcome.

Preface-Chapter 4 Analysis

In the preface, the authors introduce the idea of mental blindspots (xii), arguing that they are like visual blind spots but in our minds. They are the thoughts of which we are not necessarily cognizant but that play an important role in our behavior. In Chapter 1, the authors move on to the idea of “mindbugs” (3), which reside in our mental blindspots. Mindbugs are errors in judgment based on “ingrained habits of thought” (4). After demonstrating in Chapter 2 how we regularly lie to ourselves and others without always realizing it, the authors turn in Chapter 3 to hidden biases, which can be detected through the Implicit Association Test (IAT). Hidden biases are tied to the automatic or unconscious side of the mind, which is distinguished from the reflective or conscious side in Chapter 4.

The overarching theme of the preface and early chapters is that the unconscious mind is much more powerful than we may recognize. The concept of mental blindspots is entirely dependent upon the idea that our unconscious or automatic mind plays an influential role in our behavior. Otherwise, possessing mental blindspots wouldn’t matter at all. Instead, due to the strength of our unconscious, we may act in certain ways automatically—without realizing why. The authors rely on previous research as well as the IAT to support their claims. They assert that our unconscious “preferences steer us toward less conscious decisions” (55), sometimes leading to “costly lapses in judgment” (55). The authors explain that the idea of a powerful unconscious mind marks a recent shift in the field of psychology—away from previously accepted notions that the conscious mind dictated nearly all of our actions and behaviors. The book directly reinforces that shift.

The authors also suggest in these early chapters that the unconscious mind exists in constant tension with our conscious awareness. They provide a sample of the Race IAT on which most people score an “automatic White preference” (47). According to the authors, many of these same people also espouse egalitarian values. Their hidden biases “reveal a particular disparity in us: between our intentions and ideals, on one hand, and our behavior and actions, on the other” (20). Their unconscious judgments stand in direct conflict with their conscious beliefs. The tension and resulting discomfort created by this conflict is known as “cognitive dissonance” (59). To achieve “mental harmony” (59), we must overcome this tension by aligning our beliefs with our actions. Possessing hidden biases directly inhibits this alignment.

There is a tone of encouragement in these chapters, as the authors attempt to help the reader recognize their own hidden biases. Understanding that constant tension exists between our automatic and reflective thoughts takes us closer to resolving that tension. The book’s theme of self-awareness is introduced, as the authors suggest that knowledge is power. They provide sample IATs and encourage the reader to take them. Most people, the authors argue, “would rather know about the cracks in their own minds” (60). They want to be “certain that their automatic, unconscious thoughts do not result in actions that conflict with their reflective, rational side” (60). The more we know about our innermost thoughts, the more we can work to quell those we consciously disagree with.
