The Educational Benefits of Purpose

What are the biggest impediments for teachers in the classroom? According to a recent national survey, the most frequently cited problem was “students' lack of interest in learning.” (Among teachers in high-poverty schools, 76 percent said this was a serious issue.) These kids know what they need to do – they just don't want to do it.

One solution to this problem is to make classroom activities less tedious. Students might be bored by the periodic table, but get excited about the chemistry of cooking. Statistics is dry; the statistics of baseball is not. In other words, the same student who appears unmotivated when staring at a textbook might be extremely motivated when the material is brought to life by a charismatic teacher.

But this approach has its limitations. For one thing, the interests of students are idiosyncratic; the spin that appeals to one child is tiresome to another. In addition, some academic tasks are inherently difficult, requiring large doses of self-control. It shouldn't be too surprising, then, that 44 percent of middle-school students would rather take out the trash than do their math homework. Not every subject can be gamified. Not everything in life is fun.

So how do we help students cope with these "boring but important" tasks? That question is the subject of a fascinating new paper in the Journal of Personality and Social Psychology by David Yeager, Marlone Henderson, David Paunesku, Gregory Walton, Sidney D'Mello, Brian Spitzer and Angela Duckworth. The researchers began with the observation that, when adolescents are asked about their reasons for doing schoolwork, they often describe motives that are surprisingly selfless, or what the scientists call self-transcendent. If a student wants to become a doctor, she doesn't just want to do it for the money – she probably wants to save lives, too.

While previous research has documented the benefits of self-transcendent motives among employees in unpleasant jobs – hospital orderlies, sanitation workers and telemarketers all perform better when focused on the noble purpose of their work – Yeager et al. wanted to extend this logic to the classroom. It was not an obvious move. “It's easy to say 'cleaning up this trash helps people,'” wrote first author David Yeager in an email. “It's harder to say that learning fractions helps people...It wasn't clear that any kid would say that, or that it would be motivating.”

The first study involved 1,364 high school seniors at ten urban public high schools scattered across the country. The students were asked to rate, on a five-point scale, whether or not they agreed with a series of statements about their motives for going to college. Some of the motives were self-transcendent ("I want to learn things that will help me make a positive impact on the world"), while others were more self-oriented ("I want to learn more about my interests").

After the students completed a bevy of self-assessment surveys, it became clear that self-transcendent motives were correlated with a variety of other mental variables, such as self-control and grit. As the scientists note, an important element of self-regulation is the ability to abstract up a level, so that one understands the larger purpose of a trying task. (If you don't want to eat the marshmallow, think about your diet; if you're trying to stay focused on your homework, contemplate your future career goals.) What's more, this boost in self-regulation had real consequences, allowing the scientists to find a strong link between measures of purpose and college enrollment. Among those students with the least self-transcendent purpose, only 30 percent were actively enrolled in a college the following school year. That percentage more than doubled, to 64 percent, among students with the most purpose.

In addition to these survey questions, the scientists gave the students a new behavioral test called the "diligence task." What makes the task so clever is the way it mirrors the real-world temptations of the digital age, as students struggle to balance the demands of homework against the lure of YouTube. In the task, students were given the choice of completing tedious math problems or watching viral videos and playing Tetris. While the students were free to do whatever they preferred, they were also reminded that successfully completing the math tasks could help them stay prepared for their future careers. Not surprisingly, those who reported higher levels of self-transcendent purpose were more diligent, less likely to be tempted by mindless distraction. As the psychologists note, these results contradict conventional stereotypes about the best way to motivate low-income students. "Telling students to focus on how they can make more money if they go to college may not give them the motives they need to actually make it to college graduation," they write. Instead, these students seem to benefit the most from having selfless motives.

This research raises the obvious question: can self-transcendent purpose be taught? In their second study, Yeager et al. conducted an intervention, attempting to instill students with a more meaningful set of motives. They asked 338 ninth graders at a suburban high school in the San Francisco Bay Area to complete a reading and writing exercise during an elective period. Half of the students were assigned to the self-transcendent purpose condition, which was designed to get them to think about their selfless motives for learning. One student wrote about wanting to become a geneticist in order to "help improve the world by possibly engineering crops to produce more food," while another student wanted to become an environmental engineer "to be able to solve our energy problems."

The remaining students were assigned to a control condition. Instead of thinking about how to make the world a better place, these students were asked to read and write about how high school was different from middle school.

The intervention worked. After three months, those students with lower math and science grade point averages who were exposed to the purpose intervention saw their GPAs go up by a significant 0.2 points. (Higher-achieving students also saw a slight boost in GPA, but it wasn't statistically significant.) Although the intervention only lasted for part of a single class period, it nevertheless led to a lasting boost in academic performance.

The last two studies tried to unpack this effect. After being primed to think about the self-transcendent purpose of their schoolwork, undergraduates were asked to engage in a tedious academic exercise. They were given 100 review questions for an upcoming psychology test and encouraged to learn deeply from the activity, which meant spending plenty of time working through each question. The results were clear: students exposed to a self-transcendent purpose intervention spent nearly twice as long (49 seconds versus 25 seconds) on each review question. “Importantly, this was done in a naturalistic setting,” write the scientists. “That is, [it involved] looking at real world student behavior on an authentic examination review, when students were unaware that they were in a random-assignment experiment.” Not surprisingly, this additional effort led to higher grades on the ensuing exam.

In a final experiment, the scientists demonstrated that a purpose intervention could increase performance on the diligence task, in which students are asked to choose between a tedious math exercise and vapid viral videos. Once again, a sense of purpose proved useful, as those primed to think of selfless reasons for schooling were better at persisting at the math task, even when it was most boring. “We just don’t often ask young people to do things that matter,” wrote David Yeager by email. “We say, ‘Be selfish for now, later when you’re an adult then you can do something important.’ But kids are yearning right now to have meaning in life.”

In the paper, the scientists quote Viktor Frankl, the psychiatrist and pioneer of logotherapy, on the importance of having a meaning in life. (I wrote about Frankl here.) “Ever more people have the means to live, but no meaning to live for,” Frankl wrote, in a critique of modern life. Society excelled at satisfying our physical wants, but it tended to ignore those spiritual needs that couldn’t be measured in a lab or sold at a store.  This was a tragic error, Frankl said, for it led us to misunderstand our most fundamental nature. “A human being…doesn’t care primarily for pleasure, happiness, or for the condition within himself,” Frankl wrote. “The true sign and signature of being human is that being human always points to and is directed towards something other than itself.”

I have a feeling Frankl would have enjoyed this paper. His critics frequently accused him of deliberate ambiguity, of remaining obscure about what “meaning” actually meant. And the critics had a point: there is no pill that can give us purpose, and it’s often unclear what a therapist can do to help a patient discover his or her reason for being. In the absence of empirical evidence – his own life was his best proof - Frankl was forced to rely on aphorisms, such as this one from Nietzsche: “He who has a why to live can bear with almost any how.”

And that’s why I think Frankl would have found these new experiments and interventions so interesting. They are a reminder that meaning matters and that its impact can be measured; an intangible sense of purpose comes with tangible benefits. Again and again, we underestimate ourselves, assuming we are selfish and shallow, driven to succeed by the fruits of success. But this research proves otherwise, showing that teenagers are capable of working for selfless goals. In fact, such goals are what make them work the hardest. Because they have a why, the how takes care of itself.

Yeager, David S., et al. "Boring but Important: A Self-Transcendent Purpose for Learning Fosters Academic Self-Regulation." Journal of Personality and Social Psychology. October 2014

The Virtues of Hunger

My kitchen cupboards are filled with Trader Joe’s snacks that I bought while shopping on an empty stomach. Chocolate edamame. Pumpkin spiced pumpkin seeds. Kale chips. Lentil chips. Veggie puffs. A medley of pretzels. A collection of trail mixes. You don’t have to be Daniel Kahneman to realize that shopping while hungry is a hazardous habit, since everything looks so damned delicious. Because we are in a so-called “hot” emotional state, we end up making impulsive decisions, buying stuff that we’ll eat on the car ride home and then never again.

And it’s not just the grocery store. Dan Ariely and George Loewenstein famously demonstrated that making male subjects sexually aroused – they showed them an assortment of erotic images – sharply increased their willingness to engage in “morally questionable behavior,” such as “encouraging a date to drink to increase the chance that she would have sex with you.” It also made them less interested in using a condom.

So the science seems clear: hot emotional states are dangerous. They make us eat the marshmallow, forgo the condom, take out the subprime loan. When making a decision, it’s always better to be calm, cool and sated.

Or not.

A new paper by Denise de Ridder, Floor Kroese, Marieke Adriaanse and Catharine Evers at Utrecht University concludes that, for a certain kind of difficult strategic decision, it’s actually better to be hungry.  One possible explanation for this effect is that hunger triggers a “hot” emotional state, making us more dependent on the urges of instinct. We are less reasonable and rational, and that’s a good thing.

The Dutch researchers describe three separate experiments, all of which had relatively small sample sizes. The first experiment featured the Iowa Gambling Task (IGT), a game in which subjects are given four separate decks of cards. Each card delivers either a monetary gain or a loss of varying size. The subjects were told to draw from the decks and to make as much money as possible.

But here’s the catch – not all of the decks are created equal. Two of the decks (A and B) are full of high-risk cards. They contain larger gains ($100), but also some very punishing losses (between $150 and $1,250). In contrast, decks C and D are relatively conservative. They have smaller payoffs, but also smaller punishments. The end result is a striking contrast in the total value of the decks: while A and B lead to an average negative return of $250 for every ten drawn cards, C and D lead to an average positive return of $250. The question of the IGT is how long it takes players to figure this out.
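For concreteness, here is a minimal sketch of that payoff structure in code. The per-card loss schedule below is an assumption borrowed from the standard Bechara version of the task; only the net returns per ten cards (−$250 for A and B, +$250 for C and D) come from the study described here.

```python
# Minimal sketch of the IGT payoff structure described above.
# The per-card loss schedule is an assumption (borrowed from the standard
# Bechara task); only the net returns per ten cards are stated in the text.

DECKS = {
    # deck: (gain on every card, losses spread across each run of ten cards)
    "A": (100, [-150, -200, -250, -300, -350]),  # risky: frequent moderate losses
    "B": (100, [-1250]),                         # risky: one large loss
    "C": (50,  [-25, -50, -50, -50, -75]),       # safe: frequent small losses
    "D": (50,  [-250]),                          # safe: one moderate loss
}

def net_per_ten_cards(deck):
    """Net return from drawing ten consecutive cards off a single deck."""
    gain_per_card, losses = DECKS[deck]
    return 10 * gain_per_card + sum(losses)

for deck in "ABCD":
    print(deck, net_per_ten_cards(deck))  # A, B -> -250; C, D -> +250
```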

The novelty of this study was the introduction of the hunger variable. While all of the subjects were told not to eat or drink anything (except water) from 11 PM the night before until the experiment the next morning, those in the sated condition were offered a nice breakfast before playing the card game.

The results were surprising, as hungry subjects performed significantly better on the IGT. Among the final sixty trials, those with an empty stomach drew approximately 30 percent more cards from the “advantageous decks” than those who’d just eaten. According to the scientists, the advantage of hunger is that it makes us more sensitive to the urges of emotion. As Antoine Bechara, Antonio Damasio and colleagues demonstrated in their initial studies of the IGT, it only took about ten cards before the hands of subjects started getting “nervous” – their palms began to sweat – whenever they reached for the bad decks. (The scientists refer to this as the “pre-hunch” phase.) However, it took about eighty cards before the subjects could explain the nervousness of their hands and “conceptualize” the differences between the decks. In other words, the feelings generated by the body preceded their conscious decisions. The hand led the mind.

And that’s why hunger might be useful, at least when it comes to the IGT. “We argue that these benefits from being in a hot state result from a greater reliance on emotions that allow for a better recognition of risks that go hand in hand with big rewards,” write de Ridder et al. “This would imply that insofar [as] hot states make people more impulsive, impulsivity means that they act swiftly and without explicit deliberation.”

In a follow-up experiment, the Dutch scientists engaged in a more subtle manipulation of hunger. Instead of withholding food, they randomly divided fifty students into two groups. The first group was asked to evaluate a series of snack foods according to their desire to eat them: “To what extent do you feel like having [snack food] at this moment?” The second group, meanwhile, was asked to evaluate the snacks in terms of their price, or whether they seemed cheap or expensive. Once again, those primed to feel hot emotions – the subjects asked to think about their appetites – performed significantly better on the IGT.

The last study investigated a different sort of decision. Instead of playing cards, subjects were given a series of questions about whether they wanted a small reward right away or a larger reward at a later date. (“Would you prefer $27 today, or $50 in 21 days?”) This is known as a delay-discounting task, and it’s a standard tool for measuring the impulsivity of people. Previous work has shown that hot-emotional states lead to less self-control, which is why I bought chocolate edamame at Trader Joe’s and those aroused undergrads were more willing to have unprotected sex. However, the Dutch psychologists found that those students not given breakfast – they were still hungry – were actually better at choosing long-term profit over immediate gratification. Their hot emotional state made them more patient and reasoned, at least when it came to finding the optimal level of delay.
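One way to put a number on "impulsivity" here: delay-discounting choices are commonly summarized with the hyperbolic model V = A / (1 + kD), where A is the delayed amount, D the delay in days and k the discount rate. (This is my illustration of the standard model, not necessarily the exact analysis in the paper.) A subject who is indifferent between $27 today and $50 in 21 days satisfies 27 = 50 / (1 + 21k), which works out to k ≈ 0.04 per day; the more patient the chooser, the smaller the k. On this reading, the hungry subjects behaved as if they had smaller discount rates.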

This doesn’t mean that we can walk around the world looking at pornography and expect instant wisdom. Nor will a skipped breakfast turn us into Warren Buffett. However, when we are faced with a difficult and overwhelming decision – one in which our feelings know more than we do – then mental states that make us more sensitive to our feelings might lead to better choices. In short, it’s not the simple stuff, like shopping in a grocery store, that benefits from our hottest emotions – it’s the hard stuff. It’s drawing from decks of cards we barely understand, or playing chess, or trying to figure out what we most want from life. That's when you want to be listening to the urges of your body. That’s when the hunger helps.

de Ridder, Denise, et al. "Always Gamble on an Empty Stomach: Hunger Is Associated with Advantageous Decision Making." PLoS ONE 9.10 (2014): e111081.

Learning To Be Alone

By any reasonable standard, human beings are born way too soon, thrust into a world for which we are not ready. Not even close.

The strange timing of our birth reflects the tradeoffs of biology. Humans have a big brain. This big brain comes with obvious advantages. But it also leads to a serious design problem: the female birth canal, which shrank during the shift to bipedalism, is too narrow for such a large skull.

This is known as the obstetrical dilemma. Natural selection solved this dilemma in typically ingenious fashion: it simply had human babies enter the world before they were ready, when the immature central nervous system was still unable to control the body. (As the developmental psychologist David Bjorklund notes, if human infants “were born with the same degree of neurological maturity as our ape relatives, pregnancy would last for 21 months.”) The good news is that such premature births reduce the risk to the mother and child. The bad news is that our offspring require constant care for more than a decade, which is roughly twice as long as any other primate.

Such care is grueling; there’s no use pretending otherwise. Hillard Kaplan, an anthropologist at the University of New Mexico, estimates that it takes approximately 13 million calories to raise a child from birth to independence. That’s a lot of food and a lot of diapers.

But childcare is not just about the feeding and shitting and sleeping. In fact, taking care of the physical stuff ends up being the easy part. As every parent knows, what’s much harder is dealing with the emotional stuff, that whirligig of moods, desires and tantrums that define the immature mind. The world fills us with feelings, but kids don’t know how to cope with these feelings. We have to show them how.

In a new paper published in Psychological Science, a team of researchers led by Dylan Gee and Laurel Gabard-Durnam (lead authors) and Nim Tottenham (senior author) outlined the neural circuits underlying this emotional education. Although there is a vast amount of research documenting the importance of the parent-child bond – secure attachments in childhood are associated with everything from high school graduation rates to a lower risk of heart disease as an adult – the wiring behind these differences has remained unclear.

The main experiment involved putting 53 children and teenagers, ranging in age from four to seventeen, into an fMRI scanner. (To help the younger kids tolerate the confined space, the scientists had them participate in a mock session before the experiment. They also secured their heads with a bevy of padded air pillows.) While in the scanner, the children were shown a series of photographs. Some of the pictures were of their mother, while other pictures were of an “ethnicity matched” stranger. The subjects were instructed to press a button whenever they saw a smiling face, regardless of who it was.

When analyzing the fMRI data, the scientists focused on the connection between the right side of the amygdala and the medial prefrontal cortex (mPFC). Both of these are promiscuous brain areas, “lighting up” in all sorts of studies and all kinds of tasks. However, the scientists point out that the right amygdala is generally activated by stress and threats; it’s a warehouse of negative emotion. The mPFC, in contrast, helps to modulate these unfortunate feelings, allowing us to calm ourselves down and keep things in perspective. When a toddler dissolves into a tantrum because she doesn’t want to wear shoes, or go to bed, or eat her broccoli, you can blame her immature frontal lobes, which are still learning how to control her emotions. Kids are mostly id: this is why.

Here’s where things get interesting. For children older than ten, there was no significant difference in right amygdala/mPFC activity when they were flashed pictures of their mother versus a stranger. For younger children, however, the pictures of the mother made a big difference, allowing them to exhibit the same inverse connection between the amygdala and the mPFC that is generally a sign of a more developed mind. The scientists argue that these changes are evidence of “maternal buffering,” as the mere presence of a loving parent can markedly alter the ways in which children deal with their feelings. Furthermore, these shifts in brain activity were influenced by individual differences in the parent-child relationship, so that children with more secure attachments to their mother were more likely to exhibit mature emotional regulation in her presence. As John wrote in his first epistle, “Perfect love casts out fear.” Put more precisely, perfect love (and what’s more perfect than parental love?) allows kids to modulate the activity in the right amygdala, and thus achieve an emotional maturity that they are not yet capable of on their own.

While Gee et al. provide new clarity on the wiring of this developmental process, scientists have known for decades that the process itself is exceedingly important. Although we tend to think of the human body as a closed-loop system, able to regulate its own homeostatic needs, the intricacies of the parent-child relationship reveal that we’re actually open loops, designed to be influenced by the emotions of others. Children, in fact, are an extreme example of this open-loop design, which is why the absence of parental buffering in the first few years of life can be so crippling. Born helpless, we require an education in everything, and that includes learning to tamp down the shouts of the subterranean brain.

The child psychiatrist Donald Winnicott once observed that the goal of a parent should be to raise a child capable of being alone in their presence. That might seem like a paradox, but Winnicott was pointing out that one of the greatest gifts of love is the ability to take it for granted, to trust that it is always there, even when it goes unacknowledged. In Winnicott’s view, the process of maturity is the process of internalizing our attachments, so that the child can “forgo the actual presence of a mother or mother-figure.”

This study is a first step to understanding how this internalization happens. It shows us how the right kind of love marks the brain, how being attached to someone else endows children with a newfound maturity, a sudden strength that helps them handle a world full of scary things.

Gee, Dylan G., et al. "Maternal Buffering of Human Amygdala-Prefrontal Circuitry During Childhood but Not During Adolescence." Psychological Science (2014): 0956797614550878.


The Spell of Art

In the preface to his 2000 memoir, A Heartbreaking Work of Staggering Genius, Dave Eggers makes the reader a generous offer. If we are bothered by the dark truth of the work – it is a book set in motion by the near-simultaneous deaths of his parents – then we are free to pretend it's not true at all. In fact, Eggers will even help us out:

"If you are bothered by the idea of this being real, you are invited to do what the author should have done, and what authors and readers have been doing since the beginning of time: PRETEND IT’S FICTION. As a matter of fact, the author would like to make an offer...If you send in your copy of this book, in hardcover or paperback, he will send you, in exchange, a 3.5” floppy disk, on which will be a complete digital manuscript of this work, albeit with all names and locations changed, in such a way that the only people who will know who is who are those whose lives have been included, though thinly disguised. Voila! Fiction!"

It's a literary joke rooted in an old idea. The reason we believe that fiction is easier to take than the truth is that fiction requires, as Coleridge famously put it, a willing suspension of disbelief. This means, of course, that we can always suspend our suspension, return to reality, break the spell. Fiction is safer because it gives us an exit – all we have to do is remember that it's fiction.

Such intuitions about the emotional impotence of fiction (and the greater impact of The Truth) underpin a vast amount of culture. It's why there's something extra serious about movies that begin with the words "based on a true story," and why fantasy novels and comic books are considered such escapist fare. It's why horror movies need camp – we have to be reminded that it's fake, or else we'd be too scared – and why we take pulp fiction to the beach. (The truth is less relaxing.) Even my three-year-old daughter gets it: when she's frightened by a My Little Pony monster, she tells herself that it's all pretend. Just a cartoon. The artifice of the art is her comfort.

It's an intuition that makes sense. It sounds right. It feels right.

But it's wrong.

That, at least, is the conclusion of a new study published in the Journal of Consumer Research by Jane Ebert at Brandeis University and Tom Meyvis at NYU that tested the emotional impact of fiction versus non-fiction. In one experiment, the scientists gave several dozen undergraduates a tragic story to read about a young girl who died from meningitis. Some of the subjects were randomly assigned to the "real" condition – they were told the story was true – while others were told it was a work of fiction. Then they were asked to rate, on a nine-point scale, the extent to which the story made them feel sad and distressed. Although people expected the true story to have a greater emotional impact, that wasn't what happened. Instead, those assigned to the fictional condition – they were told the death was pretend – actually felt slightly more negative emotion. The difference wasn't statistically significant (a mean of 5.79 versus 6.18), but the aesthetic expectations of the subjects were still incorrect. In short, we are much better at suspending our disbelief than we believe.

Ebert and Meyvis confirmed this in a follow-up study. Two hundred and seventy undergraduates were shown the last eight minutes of The Champ, a "movie about an ex-boxer who fights one last fight to give his young son a better future." (Spoiler alert: the boxer dies, and his son weeps over his body.) Once again, they were randomly assigned to a fictional story condition - "none of the events depicted in the movie actually happened" - or a true story condition, in which they were told that the movie was a dramatized version of a real life. As expected, there was no significant difference between the emotional reaction of those who thought the movie was pretend and those who thought it was true. However, there was one condition in which believing The Champ was fiction made a difference: when the viewing of the movie was briefly interrupted - the subjects were told, in advance, that the movie needed to be downloaded from a remote server - those who believed it was all make-believe felt significantly less sad. (Breaks didn't affect the experience of those told it was true.) According to the scientists, the brief interruptions shattered the illusion of the art, giving viewers a chance to remind themselves that it was only art.

Of course, we often watch emotional shows filled with breaks – they're called commercials. Given the data, it's interesting to think about the toll of these breaks. One possibility is that watching television shows without commercials – as happens on Netflix or HBO – provides viewers with a far more affecting experience. But the researchers speculate that the reality of viewing is a bit more complicated. “While we don't test this in our research, we speculate that the effects of commercials will depend on what consumers do during them,” wrote Professor Ebert in an email. “If viewers are distracted by the commercials, then they may not be able to incorporate the real/fictional information while watching the movie - i.e., they won't be able to remind themselves it is only fictional. However, if viewers pay little attention to the ads they may be able to incorporate this information.” If true, this would imply that the problem isn’t commercials per se – the problem is bad commercials, since they’re the ones that interrupt the emotional spell. (I assume the same goes for DVR viewing, which requires us to fast-forward through several minutes of blurry ads.)

The larger lesson is that people are not very good at predicting their emotional reactions to aesthetic experiences. Despite a lifetime of practice, we still falsely assume that fiction won't touch us deep, that we'll be less moved by whatever isn't real. But we're wrong. And so we're gripped by Tolstoy and cry to Nicholas Sparks; we're wrecked by Game of Thrones and scared by Spider-Man. We underestimate the power of art, but the art doesn't care - it will make us feel anyway.

Ebert, Jane, and Tom Meyvis. "Reading Fictional Stories and Winning Delayed Prizes: The Surprising Emotional Impact of Distant Events.” Journal of Consumer Research. October 2014.

Are You Paying Attention?

Thank you for participating in my psychology experiment on decision-making. Please read the instructions below:

Most modern theories of decision-making recognize the fact that decisions do not take place in a vacuum. Individual preferences and knowledge, along with situational variables, can greatly impact the decision process. In order to facilitate our research on decision-making, we are interested in knowing certain factors about you, the decision maker. Specifically, we are interested in whether you actually take the time to read the directions; if not, then some of our manipulations that rely on changes in the instructions will be ineffective. So, in order to demonstrate that you have read the instructions, please ignore the sports items below. Instead, simply continue reading after the options. Thank you very much.

Which of these activities do you engage in regularly? (write down all that apply)

1)    Basketball

2)    Soccer

3)    Running

4)    Hockey

5)    Football

6)    Swimming

7)    Tennis

Did you answer the question? Then you failed the test.

The procedure above is known as an Instructional Manipulation Check, or IMC. It was first outlined in a 2009 paper, published in the Journal of Experimental Social Psychology, by the psychologists Daniel Oppenheimer, Tom Meyvis and Nicolas Davidenko. While scientists have always excluded those people who blatantly violate procedure – these are the outliers whose responses are incoherent, or fall many standard deviations from the mean – it’s been much harder to identify subjects whose negligence is less overt. The IMC is designed to filter these people out.

The first thing to note about the IMC is that a lot of subjects fail. In a variety of different contexts, Oppenheimer et al. found that anywhere from 14 to 46 percent of participants taking a survey on a computer did not read the instructions carefully, if at all.

Think, for a moment, about what this means. These subjects are almost certainly getting compensated for their participation, paid in money or course credit. And yet, nearly half of them are skimming the instructions, skipping straight ahead to the question they’re not supposed to answer.

This lack of diligence can be a serious scientific problem, introducing a large amount of noise to surveys conducted on screens. Consider what happened when the psychologists tried to replicate a classic experiment in the decision-making literature. The study, done by Richard Thaler in 1985, goes like this:

You are on the beach on a hot day. For the last hour you have been thinking about how much you would enjoy an ice-cold can of soda. Your companion needs to make a phone call and offers to bring back a soda from the only nearby place where drinks are sold, which happens to be a [run-down grocery store] [fancy resort]. Your companion asks how much you are willing to pay for the soda and will only buy it if it is below the price you state. How much are you willing to pay?

The results of Thaler’s original experiment showed that people were willing to pay substantially more for a drink from a fancy resort ($2.65) than from a shabby grocery store ($1.50), even though their experience of the drink on the beach would be identical.  It’s not a rational response, but then we’re not rational creatures. (Thaler explained this result in terms of “transaction utility,” or the tendency of people to make consumption decisions based on “perceived merits of the deal,” and not some absolute measure of value.)

As Oppenheimer et al. point out, the IMC is particularly relevant in experiments like this, since the manipulation involves a small change in the text-heavy instructions (i.e., getting a drink from a resort or a grocery store). When Oppenheimer et al. first attempted to replicate the survey, they couldn’t do it; there was no price difference between the two groups. However, after the scientists restricted the data set so that only those participants who passed the IMC were included, they were able to detect a large shift in preferences: people really were willing to pay significantly more for a drink from a resort. The replication of Thaler’s classic paper isn’t newsworthy, of course; it’s already been cited more than 3,800 times. What is interesting, however, is that the online replication of an offline experiment required weeding out less attentive subjects.

The results of the IMC give us a glimpse into the struggles of modern social science: it’s not easy finding subjects who care, or who can stifle their boredom while completing a survey. If nothing else, it’s a reminder of our natural inattentiveness, how the mind is tilted towards distraction. As such, it’s a cautionary tale for all those scientists, pollsters and market researchers who assume people are paying careful attention to their questions. As Jon Krosnick first pointed out in 1991, most surveys require cognitive effort. And since we’re cognitive misers – always searching for the easy way out – we tend to engage in satisficing, or going with the first acceptable alternative, even when it’s incorrect.

This is a methodological limitation that’s becoming more relevant. In recent years, scientists have increasingly turned to online subjects to increase their n, recruiting participants on Amazon’s Mechanical Turk and related sites. This approach comes with real upside: for one thing, it can get psychology beyond its reliance on Western, Educated subjects from Industrialized, Rich and Democratic countries. (This is known as the W.E.I.R.D. problem. It’s a problem because the vast majority of psychological research is conducted on a small, and highly unusual, segment of the human population.)

The failure rates of the IMC, however, are a reminder that this online approach comes with a potential downside. In a recent paper, a team of psychologists led by Joseph Goodman at Washington University tested the IMC on 207 online subjects recruited on Mechanical Turk. The scientists then compared their performance to that of 131 university students, who had been given the IMC on a computer or on paper. While only 66.2 percent of Mechanical Turk subjects passed the IMC, more than 90 percent of students taking the test on paper did. (That's slightly higher than the percentage of students who passed on computers.) Such results lead Goodman et al. to recommend that “researchers use screening procedures to measure participants’ attention levels,” especially when conducting lengthy or complicated online surveys.

We are a distractible species. It’s possible we are even more distracted on screens, and thus less likely to carefully read the instructions. And that’s why the IMC is so necessary: unless we filter out the least attentive among us, then we’ll end up collecting data limited by their noise. Such carelessness, of course, is a fundamental part of human nature. We just don’t want it to be the subject of every study.

Oppenheimer, Daniel M., Tom Meyvis, and Nicolas Davidenko. "Instructional manipulation checks: Detecting satisficing to increase statistical power." Journal of Experimental Social Psychology 45.4 (2009): 867-872.

Goodman, Joseph K., Cynthia E. Cryder, and Amar Cheema. "Data collection in a flat world: The strengths and weaknesses of Mechanical Turk samples." Journal of Behavioral Decision Making 26.3 (2013): 213-224.


The Draw-A-Person Test

Imagine a world where intelligence is measured like this:

A child sits down at a desk. She is given a piece of paper and a crayon. Then, she is asked to draw a picture of a boy or girl. “Do the best that you can,” she is told. “Make sure that you draw all of him or her.” If the child hesitates, or asks for help, she is gently encouraged: “You draw it all on your own, and I’ll watch you. Draw the picture any way you like, just do the best picture you can.”

When the child is done drawing, the picture is scored. It’s a simple process, with little ambiguity. One point is awarded for the “presence and correct quantity” of various body parts, such as head, eyes, mouth, ears, arms and feet. (Clothing gets another point.) The prettiness of the picture is irrelevant. Here are six drawings from four-year-olds:

The Draw-A-Person test was originally developed by Florence Goodenough, a psychologist at the University of Minnesota. Based on her work with Lewis Terman – she helped revise and validate the Stanford-Binet I.Q. test – Goodenough became interested in coming up with a new measure of intelligence that could be given to younger children. And so, in 1926, she published a short book called The Measurement of Intelligence by Drawings, which described the Draw-A-Person test.* Although the test only takes a few minutes, Goodenough argued that it provided a window into the child mind, and that “the nature and content of children’s drawings are dependent primarily upon intellectual development.” In other words, those scrawls and scribbles were not meaningless marks. Rather, they reflected something fundamental about the ways in which we make sense of the world. The act of expression was an act of intelligence, and should be treated as such.

In her book, Goodenough described the obvious benefits of her intelligence test. It was fast, cheap and fun. What’s more, it seemed to be measuring something real, as children tended to generate a consistent set of scores over time. (In other words, the test was reliable.) And yet, despite these advantages, the Draw-A-Person test largely fell out of favor by the 1970s. One explanation is that it was lumped in with other “projective” techniques, such as the Rorschach Test, that were repeatedly shown to be inaccurate, too tangled up with psychoanalytic speculation.

However, a new study by Rosalind Arden and colleagues at King’s College London suggests that Goodenough’s test still has its uses, and that it manages to quantify something important about the developing mind in less than ten minutes. “Goodenough’s genius was to take a common childhood product and see its potential as an indicator of cognitive ability,” they write. “Our data show that the capacity to realize on paper the salient features of a person, in a schema, is an intelligent behavior at age 4. Performance of this drawing task relies on various cognitive, motoric, perceptual, attentional, and motivational capacities.”

How’d the scientists show this? By giving the test to 7,752 pairs of British twins, the scientists were able to compare the drawing performance of identical twins, who share all of their genetic material, with that of non-identical twins, who only share about half. This allowed them to tease out the relative importance of genetics in determining scores on the Draw-A-Person test. (All of the twin pairs were raised in the same household, at least until age 4, so they presumably had a similar home environment.) The results were interesting, as the drawings of identical twins were much more similar than those of non-identical twins. There is no drawing gene, of course, but this result does suggest that the sketches of little kids are shaped by their genetic inheritance. In fact, the results from a single drawing were as heritable among the twin pairs as their scores on more traditional intelligence tests.
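For readers who want the logic in a formula: the classic back-of-the-envelope estimate of heritability from twin data is Falconer's formula, h² ≈ 2(r_MZ − r_DZ), where r_MZ is the correlation in drawing scores within identical pairs and r_DZ the correlation within non-identical pairs. Because identical twins share roughly twice as much segregating genetic material as fraternal twins, doubling the gap between the two correlations gives a crude estimate of how much of the variation is genetic. (The paper itself fits fuller behavioral-genetic models; this is just the intuition.)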

Furthermore, because the researchers had scores from these intelligence tests, they were able to compare performance on the Draw-A-Person test with a subject’s g factor, or general intelligence. The correlations were statistically significant but relatively modest, which is in line with previous studies. This means that one shouldn’t try to predict IQ scores based on the scribbles of a toddler; the two variables are related, but in weak ways.

However, a more interesting result emerged over time, as the scientists looked at the relationship between drawing scores at the age of 4 and measures of intelligence a decade later, when the twins were 14. According to the data, the children’s pictures were just as predictive of their intelligence scores at the age of 14 as various intelligence tests given at the age of 4. "This study does not explain artistic talent,” write the scientists. “But our results do show that whatever conflicting theories adults have about the value of verisimilitude in early figure drawing, children who express it to a greater extent are somewhat brighter than those who do not." 

Such studies trigger a predictable reaction in parents. I've got a three-year-old daughter – I couldn't help but inspect her latest drawings, counting up the body parts. (There's even an app that will help you make an assessment.) But it's important to note that this is all nonsense; the science does not support my anxieties. "I too fossicked around in old drawers to look for body-parts among the fridge-magnet scrawls of my former 4-year old," Dr. Arden wrote in an email. "I realised quickly the key question was not 'is she bright?', but 'did we have fun? Did I treasure that wonderful, lightspeed flashing childhood properly?'" In a recent article put out by King's College, Arden expands on this idea, observing that while her "findings are interesting, it does not mean that parents should worry if their child draws badly. Drawing ability does not determine intelligence; there are countless factors, both genetic and environmental, which affect intelligence in later life."

I find this study most interesting as a history-of-science counterfactual, a reminder that there are countless ways to measure human intelligence, whatever that is. We've settled on a particular concept of intelligence defined by a short list of measurable mental talents. (Modern IQ tests tend to focus on abilities such as mental control, processing speed and quantitative reasoning.) But Goodenough’s tool is proof that the mystery of smarts has no single solution. The IQ test could have been a drawing test.

This sounds like a silly conjecture. But it shouldn’t. As the scientists note, figurative art is an ancient skill. Before there were written alphabets, or counting systems, humans were drawing on the walls of caves. (There’s evidence that children participated in these rituals as well, dragging their tiny fingers through the wet clay and soft cave walls.) "This long history endows the drawing test with ecological validity and relevance to an extent that is unusual in psychometrics," write the scientists. After all, the Draw-A-Person test measures one of the most uniquely human talents there is: the ability to express the mind on the page, to re-describe the world until life becomes art, or at least a crayon stick figure.

*Goodenough originally called it the Draw-A-Man test, but later realized that the gendered description made the task harder for young girls.

Arden, Rosalind, et al. "Genes Influence Young Children’s Human Figure Drawings and Their Association With Intelligence a Decade Later." Psychological Science (2014)

The Tragedy of Leaded Gas

In December 1973, the EPA issued new regulations governing the use of lead in gasoline. These rules, authorized as part of the Clean Air Act and signed into law by President Nixon, were subject to years of political and legal wrangling. Automobile manufacturers insisted the regulations would damage car engines; oil companies warned about a spike in gasoline prices; politicians worried about the negative economic impact. In 1975, a consortium of lead producers led by the Ethyl Corporation and DuPont sued the EPA in an attempt to stop the regulations from taking effect. They argued that “lead is naturally present in the environment” and that the health impact of atmospheric lead remained unclear.

The EPA won the lawsuit. In a March 1976 opinion, the U.S. Court of Appeals for the District of Columbia Circuit established the so-called precautionary principle, noting that the potential for harm – even if it has not been proven as fact – still leaves society with an obligation to act. “Man’s ability to alter his environment,” wrote the judges, “has developed far more rapidly than his ability to foresee with certainty the effects of his alterations.” And so the phaseout of leaded gasoline took hold: by 1990, the amount of lead in gasoline had been reduced by 99 percent.*

This federal regulation is one of the most important achievements of the American government in the post-WWII era. That it’s a largely unanticipated achievement only makes it more remarkable. According to the latest data, the removal of lead from gasoline is not simply a story of clean air and blue skies. Rather, it has become a tale of sweeping social impact, a case study in how the removal of a single environmental toxin can influence everything from IQ scores to teenage pregnancy to rates of violent crime.

For the last several years, Jessica Wolpaw Reyes, an economist at Amherst College, has been studying the surprising impact of this environmental success. Her studies take advantage of a natural experiment: for a variety of “mostly random” reasons, including the distribution network of petroleum pipelines, the number of pumps available at gas stations and the local assortment of cars, the phaseout of leaded gasoline didn’t happen at a uniform rate across the country. Rather, different states showed large variation in their consumption of leaded gasoline well into the 1980s. If lead poisoning was largely responsible for the spike in criminal behavior – rates of violent crime in America quadrupled between 1960 and 1991 - then the removal of lead should predict the pace of its subsequent decline. (In many American cities, crime has returned to pre-1965 levels.) In other words, the first states to transition fully to unleaded gasoline should also be the first to experience the benefits.

That’s exactly what Reyes found. In a 2007 study, Reyes concluded that “the phase-out of lead from gasoline was responsible for approximately a 56 percent decline in violent crime” in the 1990s. What’s more, Reyes predicted that the Clean Air Act would continue to generate massive societal benefits in the future, “up to a 70 percent drop in violent crime by the year 2020.” And so a law designed to get rid of smog ended up getting rid of crime. It’s not the prison-industrial complex that keeps us safe. It’s the EPA.

As Reyes herself noted, these correlations raise far more questions than they answer. She concluded her 2007 paper, for instance, by noting that if the causal relationship between lead and crime were real, and not a statistical accident, then the rate of lead removal should also be linked to other behavioral problems, including substance abuse, teenage pregnancy, and childhood aggression. Violent crime, after all, does not exist in a vacuum.

In an important new working paper, Reyes has expanded on her previous research, showing that exposure to lead in early childhood has far-reaching negative effects. By employing data on more than eleven thousand children from the National Longitudinal Survey of Youth (NLSY), she has revealed the relationship between levels of lead in the blood and impulsive behavior in a number of domains. Consider the steep decline in teenage pregnancy in the 1990s, which has proved difficult to explain. According to Reyes, changes in lead levels caused by the Clean Air Act have played a very significant role:

“To be specific, we can consider the change in probability associated with a change in blood lead from 15 µg/dl to 5 µg/dl, a change that approximates the population-wide reduction that resulted from the phaseout of lead from gasoline. This calculation yields a predicted 12 percentage point decrease in the likelihood of pregnancy by age 17, and a 24 percentage point decrease in the likelihood of pregnancy by age 19 (from a 40% chance to a 16% chance). This is undoubtedly large: the lead decrease reduces the likelihood of teen pregnancy by more than half.”

Similar patterns held for aggressive behavior and criminal behavior among teenagers. In both cases, the rise and fall of these social problems appears to be closely correlated with the rise and fall of leaded gasoline. In short, says Reyes, exposure to lead “triggers an unfolding series of adverse behavioral outcomes.” It makes it harder for children to resist their most risky impulses, whether having unprotected sex or getting into a violent fight. (Other research shows that lead is closely linked to lower IQ scores: the typical increase in lead levels caused by leaded gasoline decreases IQ scores, on average, by roughly six points.) Placed in this context, the correlation with crime rates is no longer so surprising. Rather, it’s the natural outgrowth of a poisoned generation of children, unable to fully control themselves.

There’s one last interesting conclusion in Reyes’ new study. Because the NLSY survey contained information about parental income and education, she was able to see how leaded gasoline impacted kids across the socioeconomic spectrum. While most environmental toxins disproportionately harm poor families – they can’t afford to live in less polluted places – leaded gasoline was, in the words of Dr. Herbert Needleman, an “equal opportunity pollutant…not limited to poor African-American children.” In fact, as Reyes points out, atmospheric lead was one of the few adverse environmental influences that wealthier families could not escape, as “it was in the very air children breathed.” As a result, Reyes’ analysis shows that the children of higher-income parents were, on average, more harmed by leaded gasoline, showing a steeper drop-off across a range of negative behavioral outcomes. “In a way, the advantaged children had more to lose,” Reyes writes. “Consequently, gasoline lead may have been an equalizer of sorts.”

There are, of course, inherent limitations to these sorts of econometric studies. There might be hidden confounds, or systematic differences between generations of children that are unaccounted for by the statistical model. As Jim Manzi has pointed out, the variation in the state-by-state adoption rates of unleaded gasoline might not be quite as random as it seems, but instead be linked to subtle “differences in political economy that in turn will affect changes in crime rates.” Society is more complicated than our statistics.

But it’s important to note that the link between lead and societal problems is not merely a statistical story. Rather, it is rooted in decades of neurological evidence, which tell the same causal tale at a cellular level. Lead has long been recognized as a neurotoxin, interfering with the release of neurotransmitters in the brain. (The chemical seems to have a particular affinity for the NMDA receptor, a pathway essential for learning and memory.) Other studies have shown that high levels of lead trigger apoptosis, a fancy word for the mass suicide of brain cells. And then there’s the Cincinnati Lead Study, which has been tracking 376 children born between 1979 and 1984 in the poorer parts of the city. While the study has shown a strong link between lead exposure and violent crime – for every 5 µg/dl increase in blood lead levels at the age of six, the risk of arrest for a violent crime as a young adult increases by nearly 50 percent – it has also investigated the impact of this exposure on the brain. In a 2008 paper published in PLOS Medicine, a team of researchers led by Kim Cecil used MRI scans to measure the brain volume of enrolled subjects, who are now between the ages of 19 and 24. The scientists found a clear link between lead levels in early childhood and the loss of brain volume in adulthood. Most telling was where the loss occurred, as the scientists found the greatest damage in the prefrontal cortex, a region closely associated with impulse control, emotional regulation and goal planning. (The correlations were strongest among male subjects, which might explain why men with lead exposure are more prone to antisocial behavior.)

At the end of her new working paper, Reyes makes an argument for “strengthening the threads” between disparate disciplines, closing the explanatory gap between policy-makers, public health professionals, environmentalists and social scientists. As she notes, it’s becoming increasingly clear that the boundaries of these fields overlap, and that any complete explanation of a complex social phenomenon (say, the fall in crime rates) must also concern itself with leaded gasoline, the prefrontal cortex and economic inequality. “The foregoing results suggest that lead – and other environmental toxicants that impair behavior – may be missing links in social scientists’ explanations of social behavior,” Reyes writes. “Social problems may be, to some degree, rooted in environmental problems.”

*Despite the legal decision, the lead industry continued to fight the implementation of the EPA regulations. As Gerald Markowitz and David Rosner argue in Lead Wars, the main impetus for the removal of lead from gasoline was not the new rules themselves but rather the introduction of catalytic converters, which were installed to reduce other tailpipe emissions. Because lead damaged the platinum catalyst in the converter, General Motors and other car manufacturers were eventually forced to call for the end of leaded gasoline.

Via: Marginal Revolution

Reyes, Jessica Wolpaw. "Lead exposure and behavior: Effects on antisocial and risky behavior among children and adolescents." NBER Working Paper, August 2014

Reyes, Jessica Wolpaw. "Environmental policy as social policy? The impact of childhood lead exposure on crime." The BE Journal of Economic Analysis & Policy 7.1 (2007).

Markowitz, Gerald, and David Rosner. Lead Wars: The Politics of Science and the Fate of America's Children. Univ of California Press, 2013. p. 77-80

Communism, Inequality, Dishonesty

Dan Ariely has been trying, for years, to find evidence that different cultures give rise to different levels of dishonesty. It's an attractive hypothesis – “It seems like it should be true,” Ariely told me – and would add to the growing literature on cultural influences on human nature. No man is an island, etc.

Unfortunately, Ariely and his collaborators have been unable to find any solid evidence that such differences in dishonesty exist. He's run experiments in the United States, Italy, England, Canada, Turkey, China, Portugal, South Africa and Kenya, but every culture looks basically the same. Bullshit appears to be a behavioral constant.

Until now.

A new study by Ariely, Ximena Garcia-Rada, and Heather Mann at the Duke University Center for Advanced Hindsight and Lars Hornuf at the University of Munich has found a significant difference in levels of dishonesty among German citizens. But here’s the catch – these differences exist within the sample, between people with East German and West German roots.

The experiment went like this. A subject was given a standard six-sided die and asked to throw it forty times. Before the throwing began, he or she was told to pick one side of the die (top or bottom) to focus on. After each throw, the subject wrote down the score from their chosen side. Reporting higher scores made them more likely to get a bigger monetary payout at the end of the experiment.

What does this have to do with lying? Because the subjects never told the scientist which side of the die they selected, they could cheat by writing down the higher number, switching between the top and bottom of the die depending on the roll. For instance, if they rolled a one, they could pretend they had selected the bottom side and report a six instead.

Not surprisingly, people took advantage of the wiggle-room, reporting numbers that were higher than expected given the laws of chance. What was a bit more surprising, at least given Ariely’s history of null results, was that East Germans were significantly more dishonest. While those with roots in the West reported high rolls (4, 5 or 6) on 55 percent of their throws, those from the East reported high rolls 60 percent of the time. “Since the scale of possible cheating ranges from 50 percent high rolls to 100 percent high rolls, cheating by West Germans corresponds to 10 percent and cheating by East Germans to 20 percent of what had been feasible,” write the scientists. “Thus, East Germans cheated twice as much as West Germans overall.”
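
The arithmetic behind that “twice as much” claim is worth spelling out. An honest roller should report a high side on about half of the forty throws, while someone who always reports the better of the two sides would hit 100 percent, so any observed rate can be rescaled onto that 50-to-100 window. Here is a minimal Python sketch of the protocol and the rescaling (the simulation and the function names are my own illustration, not the authors’ code):

    import random

    def simulate_reports(n_throws=40, cheat_prob=0.0, rng=random):
        """Simulate one subject's reports under the die-rolling protocol.

        The subject privately commits to the top or the bottom of the die before
        throwing. With probability cheat_prob on each throw, they instead report
        whichever side happens to be higher.
        """
        committed_side = rng.choice(["top", "bottom"])
        high = 0
        for _ in range(n_throws):
            top = rng.randint(1, 6)
            bottom = 7 - top  # opposite faces of a die always sum to 7
            honest = top if committed_side == "top" else bottom
            reported = max(top, bottom) if rng.random() < cheat_prob else honest
            if reported >= 4:  # a "high roll" is a 4, 5 or 6
                high += 1
        return high / n_throws

    def cheating_share(high_roll_rate):
        """Rescale an observed high-roll rate onto the 50-to-100 percent window."""
        return (high_roll_rate - 0.5) / (1.0 - 0.5)

    print(cheating_share(0.55))  # West German average: 10 percent of feasible cheating
    print(cheating_share(0.60))  # East German average: 20 percent of feasible cheating

Running simulate_reports with cheat_prob near zero hovers around 50 percent high rolls, which is why any excess above that baseline can be read as cheating.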

There are a few possible explanations here. The first is that the communist experience of East Germans undermined their sense of honesty. As the scientists note, life in East Germany was defined by layers of deceit. “In many instances, socialism pressured or forced people to work around official laws,” they write. And then there was the Stasi intelligence bureaucracy, which spied on more than a third of all East German citizens. “Unlike in democratic societies, freedom of speech did not represent a virtue in socialist regimes,” write Ariely et al. “It was therefore often necessary to misrepresent your thoughts to avoid repressions from the regime.” And so lying became an East German habit, a means of survival, a way of coping with the scarcity and repression. This helps explain why older East Germans – who spent more time under the communist regime – were also more likely to cheat. “Socialistic regimes in general are corrupt, but I don’t think that has to be the case,” Ariely told me. “Personally, I think that in a small socialist society, like a kibbutz, socialism could prosper without corruption.”

But there’s another possible explanation, which is less about the ideological struggle of the Cold War and more about the particular politics of Germany. According to this account, the primary cause of East German dishonesty is not the crooked influence of socialism but rather the hazards of social comparison. East Germans aren’t more dishonest because of their communist experience – they’re more dishonest because of their post-communist existence.

A little history might be helpful. In the run-up to unification, German Chancellor Helmut Kohl famously declared that the five states of Eastern Germany would quickly become “blooming landscapes” under the capitalist system. That didn’t happen. Instead, East Germany was defined by a surge of bankruptcies, chronic unemployment and mass migration. While the situation has certainly improved in recent years – the unemployment rate is “only” a third higher in the East – German income still shows a sharp geographic split, with East Germans making 30 percent less money on average. “If you were born in the East, unification came with lots of promises,” Ariely says. “These promises did not come to full fruition. And I think if you’re an East German then you’re reminded every day of these broken promises…Even generations later there’s still a financial gap.”

Such resentments have real consequences. Previous research has shown that exposing people to abundant wealth, such as a large pile of cash, leads to higher levels of cheating. The same pattern exists when people feel underpaid and when they believe that they’ve been treated unfairly. In short, there appears to be something contagious about ethical lapses. In an unjust world, anything goes; since nothing can make it right, we might as well do wrong.

While both explanations might contribute to the observed result, it’s worth noting that these explanations come with contradictory implications. If communism itself is the problem, then the admirable goal of social equality is inherently flawed, since it’s bound up with increased levels of dishonesty. “To ensure that everyone gets the same thing, you need to give some people less than they deserve, or they think they deserve,” Ariely says. “And when people feel life has treated them unfairly, maybe they feel more okay with cheating and lying.”

However, if the main cause of East German dishonesty is social comparison – those feelings of inferiority generated by being a poor person in a rich country – then the problem isn’t the political quest for equality: it’s current levels of inequality in wealthy capitalist societies. (Remember that Chinese citizens did not show higher levels of dishonesty, which suggests that communism is not solely responsible for the effect.) “It’s getting to the point where there are very few places where the rich and poor really interact,” Ariely says, in reference to the United States. “The contrast is getting more obvious, and that’s a painful daily reminder if you’re not well off.” These reminders seem to make us less honest, or at least more willing to cheat.

So there is no obvious cure. The noble ethos of Marx – “From each according to his ability, to each according to his need” – seems just as problematic as the unequal outcomes of modern capitalism, in which some mixture of ability and luck determines all. Every political system has flaws that make us dishonest, which is another way of saying that maybe the problem isn’t the system at all.

Ariely, Dan, et al. "The (True) Legacy of Two Really Existing Economic Systems." (2014).

The Purpose Driven Life

Viktor Frankl was trained as a psychiatrist in Vienna in the early 1930s, during the peak of Freud’s influence. He internalized the great man’s theories, writing at one point that “all spiritual creations turn out to be mere sublimations of the libido.” The human mind, powered by its id engine, wanted primal things. Mostly, it just wanted sex.

Unfortunately, Frankl didn’t find this therapeutic framework very useful. While working as a doctor in the so-called “suicide pavilion” at the Steinhof hospital – he treated more than 1200 at-risk women over four years - Frankl began to question his training. The pleasure principle, he came to believe, was not the main motive of existence; the despair of these women was about more than a thwarted id.

So what were these women missing? Why were they suicidal? Frankl’s simple answer was that their depression was caused by a lack of meaning. The noun is deliberately vague, for there is no universal fix; every person’s meaning will be different. For some people, it was another person to care for, or a lasting relationship. For others, it was an artistic skill, or a religious belief, or an unwritten novel. But the point was that meaning was at the center of things, for “life can be pulled by goals as surely as it can be pushed by drives.” What we craved wasn’t happiness for its own sake, Frankl said, but something to be happy about.

And so, inspired by this insight, Frankl began developing his own school of psychotherapy, which he called logotherapy. (Logos is Greek for meaning; therapeuo means “to heal or make whole.” Logotherapy, then, literally translates as “healing through meaning.”) As a clinician, Frankl did not aim at the elimination of pain or worry. Rather, he wanted to show patients how to locate a sense of purpose in their lives. As Nietzsche put it, “He who has a why to live can bear with almost any how.” Frankl wanted to help people find their why.

Logotherapy now survives primarily as a work of literature, closely associated with Frankl’s best-selling Holocaust memoir, Man’s Search for Meaning. Amid the horrors of Auschwitz and Dachau, Frankl explored the practical utility of logotherapy. In the book he explains, again and again, how a sense of meaning helped his fellow prisoners survive in such a hellish place. He describes two men on the verge of suicide. Both of the inmates used the same argument: “They had nothing more to expect from life,” so they might as well stop living in pain. Frankl, however, used his therapeutic training to convince the men that “life was still expecting something from them.” For one man, that meant thinking about his adored child, waiting for him in a foreign country. For the other man, it was his scientific research, which he wanted to finish after the war. Because these prisoners remembered that their life still had meaning, they were able to resist the temptation of suicide. 

I was thinking of Frankl while reading a new paper in Psychological Science by Patrick Hill and Nicholas Turiano. The research explores one of Frankl’s essential themes: the link between finding a purpose in life and staying alive. The new study picks up where several recent longitudinal studies have left off. While prior research has found a consistent relationship between a sense of purpose and “diminished mortality risk” in older adults, this new paper looks at the association across the entire lifespan. Hill and Turiano assessed life purpose with three questions, asking their 6163 subjects to say, on a scale from 1 to 7, how strongly they disagreed or agreed with the following statements:

  1. Some people wander aimlessly through life, but I am not one of them.
  2. I live life one day at a time and don’t really think about the future.
  3. I sometimes feel as if I’ve done all there is to do in life.

Then the scientists waited. For 14 years. After counting up the number of deaths in their sample (569 people), the scientists looked to see whether mortality was related to a sense of purpose in life.

Frankl would not be surprised by the results, as the scientists found that purpose was significantly correlated with reduced mortality. (For every standard deviation increase in life purpose, the risk of dying during the study period decreased by 15 percent. That’s roughly equivalent to the reduction in mortality that comes from engaging in a modest amount of exercise.) This statistical relationship held even after Hill and Turiano corrected for other markers of psychological well-being, such as having a positive disposition. Meaning still mattered. A sense of purpose – regardless of what the purpose was – kept us from death. “These findings suggest the importance of establishing a direction for life as early as possible,” write the scientists.
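
One way to get a feel for that 15 percent figure is to treat it as a hazard ratio of roughly 0.85 per standard deviation and let it compound across people who differ by more than one standard deviation. This is my own back-of-the-envelope reading, not the exact model Hill and Turiano fit:

    # Assumption (mine): the 15 percent reduction behaves like a hazard ratio
    # of ~0.85 per standard deviation of purpose, compounding multiplicatively.
    HAZARD_RATIO_PER_SD = 0.85

    def relative_mortality_risk(sd_above_mean):
        """Approximate risk of dying during the study, relative to someone at the mean."""
        return HAZARD_RATIO_PER_SD ** sd_above_mean

    for sd in (-1, 0, 1, 2):
        print(f"{sd:+d} SD of purpose -> {relative_mortality_risk(sd):.2f}x the risk")
    # -1 SD -> ~1.18x, +1 SD -> 0.85x, +2 SD -> ~0.72x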

Of course, these correlations cannot reveal their cause. One hypothesis, which is currently being explored by Hill and Turiano, is that people with a sense of purpose are also more likely to engage in healthier behaviors, if only because they have a reason to eat their kale and go to the gym. (Nihilism leads to hedonism.) But that’s only a guess. Frankl himself remained metaphysical to the end. The closest he ever got to a testable explanation was to insist that man was wired for “self-transcendence,” which Frankl defined as being in a relationship with “someone or something other than oneself.” While Freud stressed the inherent selfishness of man, Frankl believed that we needed a purpose as surely as we needed sex and water and food. We are material machines driven by immaterial desires.

Frankl, Viktor E. Man's Search for Meaning. Simon and Schuster, 1985.

Klingberg, Haddon, Jr. When Life Calls Out To Us: The Love and Lifework of Viktor and Elly Frankl. Random House, 2012.

Hill, Patrick L., and Nicholas A. Turiano. "Purpose in Life as a Predictor of Mortality Across Adulthood." Psychological Science (2014): 0956797614531799.

The Too-Much-Talent Effect

A few years ago, the psychologists Adam Galinsky and Roderick Swaab began working on a study that looked at the relationship between national levels of egalitarianism – the belief that everyone deserves equal rights and opportunities – and the performance of national soccer teams in international competitions like the World Cup. It was an admittedly speculative hypothesis, an attempt to find a link between a vague cultural ethos and success on the field. But their logic went something like this: because talented athletes often come from impoverished communities, the most successful countries in the highly competitive World Cup would find a way to draw from the biggest pools of human talent. Think here of the great Pele, who was too poor to afford a soccer ball so he practiced his kicks with a grapefruit instead. Or the famous Diego Maradona, born in a shantytown on the outskirts of Buenos Aires. These men had talent but little else. It is a testament to egalitarianism that they were still able to get the opportunities to succeed.

It’s a nice theory, but is it true? After controlling for a number of variables, including GDP, population size, length of national soccer history and climate, Galinsky and Swaab found that egalitarianism was, indeed, “strongly linked” to better performance in international competition. It also predicted the quantity of talent on each team, with more egalitarian countries producing more players under contract with elite European clubs. In short, the most successful soccer countries don’t necessarily have the most innately talented populations. Instead, they do a better job of not squandering the talent they already have. 

It’s a fascinating study with broad implications. It suggests, for one thing, that much of the national variation in performance – and it doesn’t matter if we’re talking about the soccer pitch or 8th grade math scores – has to do with how well countries utilize their available human capital. What T.S. Eliot said about the excess of literary geniuses during the Elizabethan age (Shakespeare, Marlowe, Spenser, Donne, etc.) turns out to be a far more general truth. “The great ages did not perhaps produce much more talent than ours,” Eliot wrote, “but less talent was wasted.”

So far, so interesting. But as often happens in science, answers have a slippery way of inspiring new questions; the scientific process is a perpetual mystery generating machine. And it’s this next mystery – one utterly unrelated to egalitarianism – that most interests me.

While analyzing the soccer data, Galinsky and Swaab noticed something very peculiar – at a certain point, having more highly talented players on a national team led to worse performance. It was an unsettling finding, since people generally assume that talent exists in a linear relationship with success. (More talent is always better.) Such logic underpins the frenzy of NBA free-agency – every team is begging for superstars – and the predictions of bookies and commentators, who believe that the most gifted teams are the most likely to win. It’s why an already loaded Barcelona team just spent more than $100 million to acquire Luis Suarez, a player who has become as famous for biting as he has for striking.

And so, armed with this anomaly, Galinsky, Swaab and colleagues at INSEAD, Columbia University and VU University Amsterdam, decided to continue the investigation. After confirming the result among soccer teams competing at the 2010 and 2014 World Cup – too much talent appeared to be a burden, making national teams less likely to win – the scientists decided to see if their findings could be extended to other sports.

They turned first to basketball, looking at the impact of top talent on NBA team performance between 2002 and 2012. They coded talent by looking at the Estimated Wins Added (EWA) statistic, a measure that reflects the approximate number of wins a given player adds to a team’s season total. (In the 2013-2014 season, Kevin Durant led the league with an EWA of 30.1. LeBron was second with 27.3.) Once again, talent exhibited a tipping point: NBA teams benefited from having the best players unless they had too many of them. While most general managers assume the link between talent and performance is linear – a straight line with an upward slope – the scientists found that it was actually curved, and teams with more than 60 percent top talent did worse than their less skilled competition. Swaab and Galinsky call this the “too-much-talent” effect.

The relationship between team talent levels and team performance in the NBA
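
The statistical test behind that tipping point is straightforward to sketch: instead of fitting only a straight line to talent and performance, you also fit a quadratic term and check whether it is reliably negative, with the peak of the curve falling inside the observed range. Here is a toy Python version with made-up numbers (the actual analysis used EWA-based talent shares and real team records):

    import numpy as np

    # Synthetic data for illustration only: performance peaks when about 60
    # percent of the roster is "top talent," then declines.
    rng = np.random.default_rng(0)
    talent = rng.uniform(0.0, 1.0, 300)              # share of roster that is top talent
    performance = 1.2 * talent - 1.0 * talent**2 + rng.normal(0, 0.05, talent.size)

    # Lay belief: performance rises linearly with talent.
    slope, intercept = np.polyfit(talent, performance, deg=1)

    # Too-much-talent hypothesis: a negative quadratic term with an interior peak.
    b2, b1, b0 = np.polyfit(talent, performance, deg=2)
    tipping_point = -b1 / (2 * b2)   # vertex of the fitted parabola

    print(f"quadratic coefficient: {b2:.2f} (negative means an inverted U)")
    print(f"estimated tipping point: {tipping_point:.2f} share of top talent")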

What accounts for the negative returns of excessive talent? The problem isn’t talent itself; there’s nothing inherently wrong with gifted players. Rather, Galinsky and Swaab argue that too much talent can disrupt the dynamics required for effective teamwork. “Too much talent is really a metaphor for having ineffective coordination among players,” Galinsky says. “Sometimes, you need a hierarchy on a team. You need to have different roles. But if everyone thinks they should be the one with the ball, then you’re going to run into problems.” Galinsky et al. documented this drop-off in coordination by tracking various measures of “intra-team coordination,” such as the number of assists and defensive rebounds per game. (Both stats require teammates to work together.) Sure enough, the too-much-talent effect was mediated by a drop-off in effective coordination, as teams with too many top-flight athletes also struggled with their chemistry. The egos didn’t gel; the players competed for the spotlight; all the talent became a curse.

When I asked Galinsky for an example of a team undone by their surfeit of talent, he cited a 2013 quote from Mike D’Antoni, the head coach of a gifted Lakers team that woefully underperformed. (The starting five featured four probable Hall of Famers: Kobe Bryant, Steve Nash, Dwight Howard and Pau Gasol.) “Have you ever watched an All-Star game? It's god-awful,” D’Antoni said to reporters. “Everybody gets the ball and goes one on one and then they play no defense. That’s our team. That’s us. We’re an All-Star team.” The 2012-13 Lakers were swept by the Spurs in the first round of the playoffs.

Likewise, the LeBron-era Miami Heat only succeeded once their talented stars learned how to work together. “When Dwyane Wade got hurt [in 2012], the Heat became a less talented team,” Galinsky says. “But I think his injury also made it clear that he was subordinate to James, and that James was the true leader of the team. That helped them play together. Having less pure talent actually increased their performance.” This suggests that the too-much-talent effect might explain a bit of the Ewing Theory, which occurs when a team performs better after the loss of one of its stars.

Of course, if athletic talent exists in tension with teamwork, then the effect should not exist in sports, such as baseball, that require less coordination. “If you have five starting pitchers, those pitchers don’t need to like each other, because they all start on different days,” Galinsky says. “Too much talent shouldn’t be a big problem.” (The scientists quote Bill Simmons in their paper, noting that baseball is “an individual sport masquerading as a team sport.”) To test this hypothesis, Galinsky et al. used the Wins Above Replacement stat, or WAR, to assess the talent level of every MLB player. Then, they looked to see how different levels of team talent were related to team performance. As predicted, the relationship never turned negative: for baseball clubs, having more highly skilled players was always better. “These results suggest that people’s lay beliefs about the relationship between talent and performance are accurate, but only for tasks low in interdependence,” write the scientists.

The relationship between team talent levels and team performance in MLB

These findings aren’t just relevant for sports teams. Rather, the scientists insist that the too-much-talent effect should apply to many different kinds of collective activity. While organizations place a big emphasis on acquiring top talent – it’s often their top HR priority – the importance of talent depends on the nature of the task. If success depends on the accumulation of individual performances – think of a sales team, or hedge fund traders – then more talent will lead to better outcomes. However, if success requires a high level of coordination among colleagues, then more talent can backfire, especially if the group lacks a clear hierarchy or well-defined roles.  And that’s why the best basketball teams, Galinsky argues, feature talented athletes who focus on different aspects of the game. “No one would argue that the Jordan era Bulls teams weren’t incredibly gifted,” he says. “But Jordan, Pippen and Rodman all understood their roles.  They knew what they needed to do.”

There is, I think, one final implication of this paper. In a world of moneyball GMs and SportVU tracking, it’s easy to dismiss the importance of team chemistry as yet another myth of the small data age, an intangible factor in a time of measurable facts. But this paper provides fans and coaches with a useful way of thinking about the importance of player chemistry, even if we still can’t reliably quantify it.* We’ve always known that team coordination matters, that a group of talented athletes can become more (or less) than the sum of their parts. But now we have empirical proof – a lack of chemistry is the one problem that more talent cannot solve.

*We might not be able to quantify player chemistry, but there does seem to be some consensus among players as to who has it. Talented athletes take big pay cuts to play with LeBron – he makes his teammates better - but Houston couldn't convince any superstars to play with Dwight Howard and James Harden. 

Swaab, Roderick I., and Adam D. Galinsky. "Egalitarianism Makes Organizations Stronger: Cross-National Variation in Institutional and Psychological Equality Predicts Talent Levels and the Performance of National Teams." Organizational Behavior & Human Decision Processes (forthcoming)

Swaab, Roderick I., et al. "The Too-Much-Talent Effect: Team Interdependence Determines When More Talent Is Too Much or Not Enough." Psychological Science (2014).

"A Wandering Mind Is An Unhappy Mind"

Last year, in an appearance on the Conan O’Brien show, the comedian Louis C.K. riffed on smartphones and the burden of human consciousness:

"That's what the phones are taking away, is the ability to just sit there. That's being a person...Because underneath everything in your life there is that thing, that empty—forever empty. That knowledge that it's all for nothing and you're alone. It's down there.

And sometimes when things clear away, you're not watching anything, you're in your car, and you start going, 'Oh no, here it comes. That I'm alone.' It starts to visit on you. Just this sadness. Life is tremendously sad, just by being in it...

That's why we text and drive. I look around, pretty much 100 percent of the people driving are texting. And they're killing, everybody's murdering each other with their cars. But people are willing to risk taking a life and ruining their own because they don't want to be alone for a second because it's so hard."

The punchline stings because it’s mostly true. People really hate just sitting there. We need distractions to distract us from ourselves. That, at least, is the conclusion of a new paper published in Science by the psychologist Timothy Wilson and colleagues. The study consists of 11 distinct experiments, all of which revolve around the same theme: forcing subjects to be alone with themselves for up to 15 minutes. Not alone with a phone. Alone with themselves.

The point of these experiments was to study the experience of mind-wandering, which is what we do when we have nothing to do at all. When the subjects were surveyed after their session of enforced boredom – they were shorn of all gadgets, reading materials and writing implements – they reported feelings of intense unpleasantness. One of Wilson’s experimental conditions consisted of giving subjects access to a nine-volt battery capable of administering an unpleasant shock. To Wilson’s surprise, 12 out of 18 male subjects (and 6 out of 24 female subjects) chose to shock themselves repeatedly. “What is striking,” Wilson et al. write, “is that simply being alone with their own thoughts for 15 minutes was apparently so aversive that it drove many participants to self-administer an electrical shock that they had earlier said they would pay to avoid…Most people seem to prefer doing something rather than nothing, even if that something is negative.”

These lab results build on a 2010 experience-sampling study by Matthew Killingsworth and Daniel Gilbert that contacted 2250 adults at random intervals via their iPhones. The subjects were asked about their current level of happiness, their current activity and whether or not they were thinking about their current activity. On average, subjects reported that their minds were wandering – thinking about something besides what they were doing – in 46.9 percent of the samples. (Sex was the only activity during which people did not report high levels of mind-wandering.) Here’s where things get disturbing: all this mind-wandering made people unhappy, even when they were daydreaming about happy things. “In conclusion,” write Killingsworth and Gilbert, “a human mind is a wandering mind, and a wandering mind is an unhappy mind.” Although we typically use mind-wandering to reflect on the past and plan for the future, these useful thoughts deny us our best shot at happiness, which is losing ourselves in the present moment. As Killingsworth and Gilbert put it: “The ability to think about what is not happening is a cognitive achievement that comes at an emotional cost.”

Given these dismal results, it’s easy to understand the appeal of the digital world, with its constant froth of new information. To carry a smartphone is to never be alone; a swipe of the fingers turns on a screen that keeps us mindlessly entertained, the brain lost in the glow. It’s important to note, however, that Wilson et al. didn’t find any correlation between time spent on smartphones and the ability to enjoy mind-wandering. Contrary to what Louis C.K. argued, there’s little reason to think that our gadgets are the cause of our inability to be alone. They distract us from ourselves, but we’ve always sought distractions, whether it’s television, novels or a comic on a stage. We seek these distractions because, as Wilson et al. write, "it is hard to steer our thoughts in pleasant directions and keep them there." And so our daydreams often end up in dark places, as we ruminate on our errors and regrets. (It shouldn't be too surprising, then, that there's a consistent relationship between mind-wandering and dysphoria.) Here's Louis C.K. once again:

"The thing is, because we don't want that first bit of sad, we push it away with a little phone or a jack-off or the food...You never feel completely sad or completely happy, you just feel kinda satisfied with your product, and then you die. So that's why I don't want to get a phone for my kids.”

One last point. It's interesting to think of this new research in light of religious traditions that emphasize both the struggle of existence and the importance of living in the moment. According to the Buddha, the first noble truth of the world is dukkha, which roughly translates as “suffering.” This pain can't be escaped – everyone dies – but it can be assuaged, at least if we learn to think properly. (The Buddhist term for such thinking, sati, is often translated as mindfulness, or "attentiveness to the present.") Instead of letting the mind disengage, Buddhism emphasizes the importance of using meditative practice to stay tethered to the here and now. Because once you admit the big picture sadness, once you accept the inevitability of sorrow and despair, then a wandering mind keeps wandering back to that brutal truth. The only escape is to embrace what's actually happening, even if it means sitting in a bare room, noticing the waves of boredom and sadness that wash over the mind. "Let the sadness hit you like a truck," Louis C.K. says, sounding a little bit like a foul-mouthed Buddha. "You're lucky to live sad moments."

The Skin Is A Social Organ

Your body is covered in hairy skin.* Below the surface of this skin are wispy sensory nerves known as C-fiber tactile afferents, or CTs. These nerves are designed to respond to gentle contact – even the slightest of indentations can turn them on, starting a cascade of electrical signals that ends with a feeling of touch. For a long time, the most notable fact about these nerves was their lack of speed: because CTs had no myelin insulation, they were about 50 times slower at transmitting sensory signals to the brain than myelinated A-fiber nerves.

And so a simple model of the touch system emerged: we had a fast pathway, modulated by A-fibers, which gave us quick and precise information about the surface of the body. Such a system had an obvious function, allowing us to touch the world, manipulate objects and monitor the body in space.

But if we have this fast sensory system, then why are the vast majority of nerves in hairy skin slow CT fibers? It’s like a customer with a broadband connection keeping a dial-up modem around, just in case.

In recent years, however, it’s become clear that CT fibers are not merely an archaic back-up or useless redundancy. Rather, they are endowed with their own unique purpose, which is just as essential as the speedy transmission of A-fibers. In a new Perspective published in Neuron, the neuroscientists Francis McGlone, Johan Wessberg and Hakan Olausson lay out the argument. They suggest that a particular kind of C-fiber nerve is largely responsible for the emotional quality of touch, passing along crucial information about the “affective and rewarding properties” of the most tender contact. When we talk about the power of touch – say, the healing properties of a hug, or a gentle caress – we are talking about the powers of these slow nerves.

There are multiple strands of evidence. The first comes from neurological patients with selective damage to A-fibers, leaving them with a touch pathway composed exclusively of C-fibers. These people are mostly numb. However, this numbness comes with a strange loophole – if their skin is brushed gently at a low velocity (between 1 and 10 centimeters per second), their bodies can be filled with pleasurable sensations. The feeling is vague – some patients couldn’t even identify the body quadrant that was being stroked – but everyone felt it.

The second piece of evidence is the inverse situation: patients with a rare genetic mutation that wipes out their C-fiber pathway, so that only A-fibers remain. While these patients have primarily been studied for their inability to feel pain – they are often oblivious to severe wounds, such as that from a broken bone – it turns out that they’re also less likely to experience pleasure from a soft touch.

These differences in the function of A and C fibers are echoed in the brain. While skin stroking in normal subjects triggers activation in the somatosensory cortex – the part of the brain that tells us where the sensation is coming from – patients with only C-fibers show a selective activation in the posterior insular cortex and other limbic areas. According to McGlone et al., this suggests that a class of touch-sensitive C-fibers have “excitatory projections mainly to emotion-related” systems in the brain. They are designed to fill us with feeling, not to tell us where in the flesh these feelings are coming from.

This all makes sense, if you think about it. We are creatures of touch, naked apes that still enjoy getting groomed. We soothe children with soft strokes and kiss the limbs of lovers; the skin is a social organ. While neuroscience tends to focus on vision and hearing as conduits for social information, McGlone et al. point out that the epidermis is also “the site of events and processes crucial to the way we think about, feel about, and interact with one another.”

These touches are most important during development. As Harry Harlow first observed, the absence of comforting contact is deeply stressful for young monkeys, leaving them with a wound from which they never recover. More recent studies have found that separating infant monkeys from their mother with a transparent screen – they could still hear, smell and see her – led to chronic activation of stress pathways in the brain. The stress was only diminished if the young monkeys were allowed to form “peer touch relationships,” suggesting that physical contact is required for normal brain development. Michael Meaney, meanwhile, has shown that rat pups born to mothers that engaged in lots of licking and grooming were much better at coping with stressful situations, such as the open-field test. They solved mazes more quickly, were less aggressive with their peers and lived longer lives. Meaney argues that these differences are driven by differences in the brain, as rat pups exposed to a surfeit of tender contact have fewer receptors for stress hormones and more receptors for the chemicals that attenuate the stress response.

And then there’s the tragic evidence from early 20th century orphanages and foundling hospitals. In these childcare institutions, there was an intense focus on cleanliness and efficiency. As the psychologist Robert Karen notes, this meant that babies were “typically prop-fed, the bottle propped up for them so that they wouldn’t have to be held during feeding. This was considered ideally antiseptic, and it was labor-saving as well.”

Unfortunately, such routines proved deadly. Although these hospitals supplied infants with adequate nutrition and warmth, they struggled to keep them alive. A 1915 review of ten infant foundling hospitals in the Eastern United States, for instance, concluded that up to 75 percent of the children died before their second birthday. (The best hospital in the study had a 31.7 percent mortality rate.) In fact, it wasn’t until the early 1930s, when pediatricians like Harry Bakwin began insisting that nurses touch the babies, that mortality rates declined. The soft touches, carried along by those CT nerves, were a kind of sustenance.

Of course, the newfound recognition of C-fibers doesn’t mean the mystery of emotional touch has been solved. The pleasure of contact isn’t just a bottom-up phenomenon, triggered by some peripheral nerves in the flesh. Rather, it’s entangled with all sorts of higher order variables, from the context of touch to the “relationship of the touchee with the toucher.” If anything, the fact that we’re only now beginning to outline the mechanics of the caress is a reminder that the nervous system is full of unknowns, threaded with wires we don’t understand. Somehow, in the milliseconds after the skin is stroked, we turn that mechanical twitch into a powerful feeling, which eases our anxiety and reminds us why it’s good to be alive.

*The only non-hairy parts of the skin - so-called glabrous skin - are found on the soles of the feet and the palms of the hands. 

McGlone, Francis, Johan Wessberg, and Håkan Olausson. "Discriminative and Affective Touch: Sensing and Feeling." Neuron 82.4 (2014): 737-755.

Pity the Fish

Consider the lobster; pity the fish. In his justly celebrated Gourmet essay, David Foster Wallace argued that the lobster was not a mindless invertebrate, but rather a creature capable of feeling, especially pain. Wallace made his case with the brute facts of comparative neurology - lobsters have plenty of pain receptors - but also with anecdotes of the kitchen, as the crustacean resists its boiling death.  "After all the abstract intellection," Wallace writes, "there remain the facts of the frantically clanking lid, the pathetic clinging to the edge of the pot. Standing at the stove, it is hard to deny in any meaningful way that this is a living creature experiencing pain and wishing to avoid/escape the painful experience."

I was thinking of Wallace's essay while reading a new paper in Animal Cognition by Culum Brown, a biologist at Macquarie University in Australia. Brown does for the fish what Wallace did for the lobster, calmly reviewing the neurological data and insisting that our undersea cousins deserve far more dignity and compassion than we currently give them. Brown does not mince words:

"All evidence suggests that fish are, in fact, far more intelligent than we give them credit. Recent reviews of fish cognition suggest fish show a rich array of sophisticated behaviours. For example, they have excellent long-term memories, develop complex traditions, show signs of Machiavellian intelligence, cooperate with and recognise one another and are even capable of tool use. Emerging evidence also suggests that, despite appearances, the fish brain is also more similar to our own than we previously thought. There is every reason to believe that they might also be conscious and thus capable of suffering."

What makes this review article so necessary is that, as Brown notes, fish are afforded virtually no protections against human cruelty. They are the most consumed animal; the most popular pet; the only creature for which it’s an acceptable leisure activity to hook them with a metal barb and then reel them, against their frantic wishes, into an environment in which they will slowly suffocate to death, drowning in air.

Such suffering is ignored because we assume it doesn't exist; fish are supposed to be primitive beasts, cold-blooded and unconscious. But Brown gathers a persuasive range of evidence highlighting our error: 

  • Fish are exquisitely sensitive creatures, with perceptual abilities that track (or exceed) those of mammals. 
  • Fish can learn a simple Pavlovian conditioning task - light paired with food - significantly faster than rats and dogs. They also exhibit one-trial learning: pike that have been hooked often become "hook shy" for over a year.
  • Fish have incredible spatial memories. Gobies, for instance, sometimes leap from rock pool to rock pool. "Even after being removed from their home pools for 40 days, the fish could still remember the location of surrounding pools," writes Brown. "This astonishing ability makes use of a cognitive map built-up during the high tide when the fish are free to roam over the rock platform."
  • Fish exhibit social learning. Salmon born in a hatchery can be taught to recognize unknown live prey by pairing them with fish that already feed off the prey. Guppies can pass along foraging routes; some scientists speculate that the recent shift in cod spawning grounds reflects the "systematic removal of older, knowledgeable individuals by commercial fishing."
  • Fish know each other. Guppies can easily recognize up to 15 individuals. If allowed to choose, fish prefer to shoal with fish they have met before.
  • Fish exhibit a high degree of social intelligence.  "If a pair of fish inspects a predator," Brown writes, “they glide back and forth as they advance towards the predator each taking it in turn to lead. If a partner should defect or cheat in any way, perhaps by hanging back, the other fish will refuse to cooperate with that individual on future encounters." Or look at the cleaner wrasse, which removes parasites and dead skin from the surface of "client fish." Each wrasse has a large set of regular customers, who they seek to please in order to ensure return business. "If the cleaner should accidentally bite the client, then the client will rapidly swim away. But the cleaner has a mode of reconciliation; they chase after the distraught client and give them a back rub, thus enticing them to come again." Interestingly, the wrasse are far less likely to nip predatory fish, suggesting that they are able to categorize clients according to their aggressive potential. 
  • Fish build nests and use tools. At least 9000 species of fish construct nests, either for eggs or shelter. Wrasse species often use rocks to crush sea urchin shells; they use anvils to break open shellfish. Meanwhile, cod in the laboratory figured out how to use tiny metal tags embedded in their backs to operate a feeder.
  • Fish rely on the same basic circuitry of nerves to process pain as mammals. This shouldn’t be too surprising: the pain receptors in all vertebrates are descended from an early fishlike ancestor. Furthermore, there’s evidence that fish also respond to pain in a “cognitive sense” – they have an experience of suffering. Brown cites a study showing that fish injected with acetic acid display “attention deficits,” and lose their fear of novel objects. Presumably, he writes, this is because “the cognitive experience of pain is dominant over or overshadows other processes.”

Brown concludes his review by arguing that fish deserve to be included in our “moral circle.” These vertebrates are worthy of the same protections against wanton suffering that we offer to most land-based mammals. And yet, Brown readily admits that, given current fishing practices, the “ramifications for such animal welfare legislation…is perhaps too daunting to consider.” Billions of humans depend on fish for sustenance, but there is no way to catch a fish without being cruel.

My own dietary decisions are harder to defend. I don't fish, but I love to eat them. Wild salmon is my favorite. Brown’s paper reminded me of a wonderful Stanley Kunitz poem, “King of the River.” The poem describes the heroic journey of a Pacific salmon, as it returns to the fresh water of its birth to spawn and die. If Brown makes the empirical case for fish – they know more than we think, they feel more than we want them to - then Kunitz takes us inside the strange mind of the orange fleshed vertebrate, swimming madly upriver, its suicidal trip driven by a familiar mixture of “nostalgia and desire.”  

"A dry fire eats at you.
Fat drips from your bones.
The flutes of your gills discolor.
You have become a ship for parasites.
The great clock of your life
is slowing down,
and the small clocks run wild.
For this you were born."

Brown, Culum. "Fish intelligence, sentience and ethics." Animal Cognition, June 2014.

The Violence of the Pass

Football is going to change. That much is clear. The correlation between the impacts sustained on the football field and the brain damage of players is no longer just a correlation: it’s starting to look like a tragic cause.

But how is the sport going to change? There will be better helmets, of course, and stricter rules about helmet-to-helmet contact, and more accurate monitoring of head trauma.  We’ll start tracking the linear acceleration (g) of skulls as carefully as we track the stats of quarterbacks.

However, it’s also worth considering the ways in which the concussion crisis will interact with pre-existing football trends. Over the last decade, the single most notable shift within the sport has been the rise of the passing offense, with passing yardage increasing by roughly twenty percent. In 2003, as Ty Schalter notes, only Indianapolis used the shotgun offense more than 30 percent of the time. (Three teams never used the shotgun at all.) By 2012, most teams were approaching a shotgun usage rate of 40 percent or higher.

At first glance, this shift towards passing might seem like an effective response to the concussion crisis. Studies relying on head telemetry data – they use special helmets outfitted with a network of sensors – show that linemen and linebackers sustain, by far, the most sub-concussive hits over the course of a game. (Running backs take the hardest hits.) Fewer running plays, then, should translate to less wear and tear on the brains of those players brawling at the line of scrimmage. (Pass routes are the only part of the game in which, after five yards, no meaningful contact is allowed; it’s football pretending to be basketball.) When a pass-dominant offense takes the field, the game is still violent, but the violence seems contained. More spread equals less smash mouth.

Alas, a new paper by Douglas Martini, James Eckner, Jeffrey Kutcher and Steven Broglio at the University of Michigan, Ann Arbor, suggests that the rise of the passing offense will do little to quell the concussion crisis. In fact, it might even be making the problem worse. In their study, Martini et al. tracked 83 high school football athletes using the HITS head impact telemetry system. While most public attention has focused on the brains of NFL players, these highly paid athletes actually represent a very small sliver of those at risk. There are, give or take, a few thousand players on NFL payrolls. There are approximately 68,000 football players at the college level. And there are 1.2 million football players at the high-school level.

The question investigated by these researchers was whether or not offensive style influenced the amount and distribution of head impacts. One team utilized a run-first offense (RFO); the other used a pass-first offense (PFO). The RFO team passed, on average, 8.8 times a game and ran the ball 32.9 times, while the PFO passed 25.6 times and ran 26.3 times. 

So what did they find? The first thing to state is the obvious: football is a contact sport. These 83 teenagers endured 35,681 head impacts over the course of the season; at least six of these impacts resulted in serious concussions. 

What’s more, the different offensive styles resulted in significantly different patterns of impact. The running offense generated about 1.5 times as many total head blows as the passing offense – many of these occurred during practice – while the passing offense generated bigger average blows, especially during the games. This was true across every measure of head impact, from linear acceleration (g) to the overall hit severity profile (HITsp). In short, when teams throw the ball in the air, there are fewer total hits, but each hit is harder, especially for skill position players, such as running backs and wide receivers. The scientists speculate that the root cause of these differences is simple physics, as players in the pass offense are “able to reach higher running velocities before contacting an opponent than the equivalent RFO athletes… As such, the PFO athletes would have larger initial velocities that resulted in greater deceleration values following impact.” And it’s the deceleration that’s dangerous, as the soft brain lurches into the hard bones of the skull. This helps explain why, in 2012 and 2013, receivers and cornerbacks sustained more concussions than any other positions in the NFL. Their speed across the field more than makes up for their lack of mass.
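
The underlying physics is just velocity divided by stopping time: if a collision brings the head to a stop over roughly the same brief interval, a higher closing speed means a proportionally larger deceleration, and deceleration (in g) is what the telemetry records. A rough illustration, with the impact duration and speeds chosen purely for the example rather than taken from the study:

    G = 9.81  # standard gravity, m/s^2

    def average_impact_g(delta_v_mps, impact_duration_s=0.015):
        """Average head deceleration, in g, for a velocity change absorbed over a
        fixed impact duration (the 15 ms figure is an illustrative assumption)."""
        return delta_v_mps / impact_duration_s / G

    print(f"3 m/s closing speed -> ~{average_impact_g(3):.0f} g")   # trench-style collision
    print(f"8 m/s closing speed -> ~{average_impact_g(8):.0f} g")   # open-field hit on a receiver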

The larger lesson is that there appears to be a fundamental tradeoff between the frequency of hits in a football game and their magnitude. (More research on this subject is desperately needed - the NFL should install head telemetry units in every helmet.) The passing attack might look less aggressive, but appearances can be deceiving; the elegant throws still end with a cloud of dust. If nothing else, this study is yet another reminder that head violence is an intrinsic part of football, and not a by-product of a particular style of play.  

Martini, Douglas, et al. "Subconcussive head impact biomechanics: comparing differing offensive schemes." Medicine & Science in Sports & Exercise 45.4 (2013): 755-761.


Cohesion, PTSD and War

I’ve been reading Head Strong, an excellent new book by Michael D. Matthews, a professor of engineering psychology at West Point. The book describes the history and future of military psychology, from the birth of intelligence testing during WWI to the next generation of immersive battlefield simulations.

Not surprisingly, the problem of Post-Traumatic Stress Disorder (PTSD) is a recurring theme, as Matthews discusses recent attempts by the Armed Forces to promote resilience. (“The military does a good job of teaching its soldiers to kill. But it does not do a good job of teaching them to cope with it,” he writes.)  Matthews details the Comprehensive Soldier Fitness (CSF) program, based on the work of Martin Seligman, and the unintended consequences of creating weapons systems so effective that they “give the individual soldier the firepower of a traditional squad or platoon.” One potential downside of these new systems, Matthews argues, is that American soldiers will gain the ability to control a large territory by themselves, and thus end up isolated from their comrades. “Soldiers fight for their buddies, who traditionally they could literally reach out and touch,” he writes. While technology makes the dispersal of troops possible, Matthews suggests there will be no substitute for the “physical presence of others,” especially when soldiers are “placed in situations of mortal danger.”

Interesting stuff. But there was one data point in the book that I couldn’t stop thinking about, even though Matthews mentions it almost as an aside. While pointing out that PTSD rates vary widely between military units – the overall rate for deployed soldiers hovers between 10 and 25 percent – Matthews notes that “highly trained and specialized units including SEAL teams, Rangers, and other elite organizations” have proven far more resistant to the disorder. (Their PTSD rates are typically less than five percent.) What makes this statistic even more surprising is that these elite units tend to see frequent and intense combat – in objective terms, they have experienced the most trauma. And yet, they seem the least troubled by its aftermath.

Why are elite units so resilient? There are many variables at work here; PTSD is triggered by a multitude of risk factors. For starters, elite units tend to be better educated and in better physical condition, both of which are correlated with a reduced incidence of PTSD. Self-selection also plays a role: anyone tough enough to become a Ranger or SEAL has learned how to handle stress and hardship.

Matthews, however, mentions a protective factor that is often overlooked, at least in popular discussions of PTSD: unit cohesion. According to Matthews, elite units are “highly cohesive”; the soldiers form close relationships, built out of their shared experiences. In Pentagon surveys, they are more likely to agree with statements such as “my unit is like family to me,” or “members of my unit understand me.”

A series of recent studies backs up Matthews’ argument, highlighting the protective effects of unit cohesion. One analysis of 705 Air Force medical personnel deployed as part of Operation Iraqi Freedom found a “significant linear interaction…such that greater cohesion was associated with lower levels of PTSD symptom severity.” When stress exposure was high, for instance, medics in the most cohesive units reported PTSD symptoms that were approximately 25 percent less severe, at least as measured by the military’s PTSD checklist. Another study of 4901 male personnel from the UK armed services (Royal Navy, Royal Marines, British Army and Royal Air Force) concluded that unit cohesion was associated with significantly lower levels of PTSD and other mental disorders, such as depression. The British scientists end their paper by stressing the importance of fostering unit cohesion among soldiers, given "that so many other factors which have a positive association with higher levels of mental health problems are un-modifiable (for example, family background and exposures on deployment)." When it comes to PTSD, cohesion isn't just an incredibly important variable - it's a variable the Armed Forces can influence.

The explanation for these results is straightforward: in the aftermath of a terrible life event, other people are the best medicine. It doesn’t matter if we’re being helped by another soldier or a loving spouse - it’s really hard to get over the trauma alone. According to a highly cited meta-analysis of the risk factors associated with PTSD, a lack of social support is incredibly dangerous for those dealing with an acute stressor. (Among military subjects, a lack of social support was the single most important risk factor; among civilians, it placed second.) Close relationships, in this sense, are the ultimate coping mechanism, allowing us to survive the worst parts of life.

In some instances, the presence of close relationships seems to matter more than the stressor itself. Consider a natural experiment that took place during World War II, when approximately 70,000 young Finnish children were evacuated to temporary foster homes in Sweden and Denmark. For the kids who stayed behind in Finland, life was certainly filled with moments of trauma and stress — there were regular air bombardments, severe food shortages and invasions by the Soviets and the Germans. Those kids sent away, however, experienced a different kind of stress. Their wartime experiences might have featured less actual war, but the lack of social support would prove, over time, to be even more dangerous. A 2009 study found that Finnish adults who had been sent away from their parents between 1939 and 1944 were nearly twice as likely to die from cardiovascular illness as those who had stayed at home. A follow-up study found that these temporary war orphans also showed higher levels of stress hormone, stress reactivity and depression, sixty years after they’d been separated from their families. Chronic stress sucks. But chronic stress in the absence of supportive relationships can be crippling.

Perhaps this is why soldiers in elite units are so resilient. When the Armed Forces take unit cohesion seriously, they turn out to be remarkably good at it, able to create deep, emotional bonds among their members. Over time, these relationships become an essential part of how soldiers cope with the violence. While unit cohesion has traditionally been seen through the prism of combat performance – more cohesive units perform better in battle – it seems likely that the biggest benefits of cohesion come after the war.

Matthews, Michael D. Head Strong: How Psychology is Revolutionizing War. Oxford University Press, 2013.

Dickstein, Benjamin D., et al. "Unit cohesion and PTSD symptom severity in Air Force medical personnel." Military Medicine 175.7 (2010): 482-486.

The Ritual Effect

Stella Artois is an old beer with a long history. The original brewery (Den Hoorn) was founded in 1336 in Leuven, Belgium. In 1717, Sebastian Artois bought the brewery and promptly renamed it after himself.  The company has been brewing a pale lager ever since.

To celebrate this history, Stella Artois has developed a Nine Step Pouring Ritual. The first step is The Purification, in which the Stella branded chalice is given a cold water bath. Then comes The Sacrifice, as the bartender squanders the first few drops of beer to “ensure the freshest taste.” After that comes The Liquid Alchemy – “the chalice is held at 45 degrees for the perfect combination of foam and liquid” – and The Crown, whereby the chalice is straightened out. The final steps are a blur of movement: there is The Removal, The Beheading – the bartender trims the foam with a knife – The Judgment, The Cleansing and The Bestowal, in which the beer is presented on a clean coaster, with the logo facing outward. 

It’s a silly ceremony, made all the sillier by its Seriousness. And while Stella might like you to think that their pouring ritual is some medieval sacrament invented by Trappist monks, it’s actually a fairly recent marketing ploy. (The ritual appears to have been first codified as part of the World Draught Master Competition in the late 1990s.) It’s also a remarkably successful gimmick, helping distinguish Stella from all the other fizzy, refreshing and tasteless beers on the supermarket shelf. According to the experts over at Beer Advocate, Stella is a worse beer than its corporate sibling, Budweiser. (Stella scores a 73, while Bud gets an 80.) And yet, Stella is typically 25 percent more expensive, both at bars and in stores.

So why am I writing about this mediocre and "reassuringly expensive" beer? A recent paper in Psychological Science, led by Kathleen Vohs at the Carlson School of Management at the University of Minnesota, begins to explain why rituals like the Nine Step Pour are so effective. When acted out, these rituals don’t merely enhance our perception of the brand. They enhance our perception of the beer.

Vohs and her colleagues (Yajin Wang, Francesca Gino and Michael Norton) conducted four separate experiments. In the first experiment, 52 students were randomly assigned to one of two conditions: ritual or no ritual. In the ritual condition, the students were given the following instructions: “Without unwrapping the chocolate bar, break it in half. Unwrap half of the bar and eat it. Then, unwrap the other half and eat it.” In the no ritual condition, students were merely given the candy, without any instructions.

As expected, those in the ritual condition enjoyed the chocolate more than those who simply consumed it. They spent more time “savoring” the candy bar, thought it was more flavorful, and were willing to pay about 75 percent more money for it.

In another experiment, Vohs and colleagues showed that the same logic could be applied to carrots. (This time the ritual consisted of rapping on the desk and taking deep breaths.) Once again, the differences were stark: those assigned to the ritual group reported higher levels of anticipated and experienced enjoyment.  

The last two studies attempted to explain why rituals enhance consumption. Vohs and colleagues showed that “personal involvement” is crucial – watching a stranger perform a ritual with lemonade didn’t make the drink taste better – and that rituals increase our “intrinsic interest” in whatever we’re eating.

This is a nice example of social science clarifying a cultural quirk. After all, rituals are everywhere, especially around food and drink. (There's grace before dinner, the Oreo cookie "twist, lick and dunk," a sommelier presenting the cork, a barista making a pour-over, etc.) Even when the steps themselves are meaningless, they give more meaning to whatever happens next.

Walter Benjamin famously argued that art began "in the service of a ritual," that its "aura" was "embedded" in a larger set of acts and ceremonies. The invention of mechanical reproduction changed all that, Benjamin wrote, "emancipating the work of art from its parasitical dependence on ritual." This came with happy consequences - we could buy a Rothko poster for the bedroom - but it also stripped many products of their artisanal roots. Consider the beer shelf: the same multinational company makes Stella, Budweiser and Corona and they pretty much taste the same. 

I think Benjamin would be amused by the ways in which our age of mass production has returned us to ritual, as we seek to differentiate all these products that aren't very different at all. (Your favorite craft IPA probably doesn't need a nine-step pour.) These rituals pretend to have a function - Stella says it's about getting the fizz right - but they're really there to elevate the ordinary. For a few moments after The Bestowal, as we stare at that logo-covered chalice handed to us by the bartender/brand ambassador, it's possible to believe that this generic beer actually has a tinge of aura.

Vohs, Kathleen D., et al. "Rituals enhance consumption." Psychological Science 24.9 (2013): 1714-1721.

A Science of Self-Reports?

In 1975, the psychologists Stephen West and T. Jan Brown investigated the factors that make people more likely to help a stranger. What made their study unique was that they ran the experiment twice, using two different methods.

In the first study, they staged a crisis. Sixty men walking on a college campus were stopped by a woman who made the following request:

“Excuse me, I was working with a rat for a laboratory class and it bit me. Rats carry so many germs – I need to get a tetanus shot right away but I don’t have any money with me. So I’m trying to collect $1.75 to pay for the shot.”

In some conditions, the woman held her hand as if it had been bitten; in other conditions, her fist was wrapped in gauze that had been soaked in artificial blood. Sometimes she wore an “attractive pant outfit and was tastefully made up” and sometimes she wore a blonde wig, white face powder and dark lipstick, “all of which were inappropriate for her natural complexion.”

Not surprisingly, men offered the most help when the woman was attractive and in urgent need of help, giving her an average of 43 cents. (Every man stopped to help in this condition.) In contrast, an “unattractive” woman with a bloody bandage received 26.5 cents on average, and only 80 percent of men offered help. The less severe conditions led to even less assistance: the men donated approximately 13.5 cents, with two-thirds providing some amount of money.

So far, so obvious: when deciding whether or not to help a stranger, the most important variable is the severity of the situation. We might stop for a head-on collision, but not for a fender bender. If you're asking for money, it’s better to be good-looking.

But the most intriguing part of the paper came when the scientists tried to replicate their field study in a lab. Instead of staging an emergency on the street, the researchers read sixty male subjects a description of the injury (severe/not severe) and showed them a photograph of the woman (attractive/unattractive). Then, the men were asked how much money they would be willing to give her.

In this “interpersonal simulation,” the men were very generous. Interestingly, they gave the woman the most money in the unattractive/severe condition, offering her an average of $1.20, or four and a half times what their peers offered in real life. The same basic pattern persisted across every situation, with the men giving her far larger sums when she was a hypothetical. The lab subjects also insisted they wouldn’t be swayed by her appearance - they said they'd give more when she was less attractive - even though the field test strongly suggested otherwise. West and Brown conclude their 1975 paper with a warning: “The comparison of the results of the field experiment and the interpersonal simulation raise serious questions concerning the validity of the latter approach as a strategy for investigating human social behavior.”

I first learned about this study from a fascinating critique of modern psychology, published in 2007 by the psychologists Roy Baumeister at Florida State University, Kathleen Vohs at the Carlson School of Management, University of Minnesota and David Funder at the University of California, Riverside. In “Psychology as the Science of Self-Reports and Finger Movements,” Baumeister, et al. hold up the results of the West/Brown study as an example of the unsettling discrepancy between what we think we’ll do and what we actually do. Because it turns out that such discrepancies are a recurring theme in the literature. For instance, Baumeister, et al. note that “affective forecasting studies” – research in which people are asked how they will feel if x happens – “systematically show the inaccuracies of people’s predictions” about their own future emotions. Meanwhile, financial decision-making research reveals that people are “moderately risk averse” when dealing with pretend money, but become far more risk averse when large amounts of real cash are involved. Other experiments show that merely asking people about their preferences can alter their preferences; the act of introspection has a distorting effect. As the psychologist Timothy Wilson famously argued, we are all “strangers to ourselves.”

And yet, despite this surplus of evidence, Baumeister and colleagues document a steady decline in the percentage of studies that actually look at behavior, and not just our predictions of it. They track research published in the elite Journal of Personality and Social Psychology over the last forty years, and the trendline points steadily downward.

As the psychologists note, this is a troubling situation for a science that is typically described as the study of human behavior. Instead of observing humans in vivo, the vast majority of these papers rely on questionnaires, tests and stimuli flashed on computer screens. Subjects predict their actions rather than act them out. But Baumeister et al. point out that such methodologies leave out a lot of the complexity that makes people so interesting. In fact, many of the canonical studies of modern psychology, such as the Milgram study, the Stanford Prison Experiment and Mischel's marshmallow task, derive their power from the contradiction between predicted behavior - I wouldn't do that! - and our actual behavior. What's more, the "eclipse" of behavioral studies is inevitably shrinking the range of possible psychological subjects, as much of human nature cannot be easily reduced to a self-report. Here are the scientists, getting frisky:

“Whatever happened to helping, hurting, playing, working, taking, eating, risking, waiting, flirting, goofing off, showing off, giving up, screwing up, compromising, selling, persevering, pleading, tricking, outhustling, sandbagging, refusing, and the rest? Can’t psychology find ways to observe and explain these acts, at least once in a while?”

There are, of course, a number of factors behind this shift away from behavior. Field studies are riskier and more expensive; institutional review boards are more likely to object to behavioral experiments, as they might upset subjects; in the 1970s, peer-reviewed journals began explicitly favoring psychology articles with multiple studies and, as Baumeister et al. note, “it is far easier to do many studies by seating groups in front of computers…than to measure behavior over and over.”

Again: there is nothing wrong with self-reports. In their paper, Baumeister, Vohs and Funder repeatedly emphasize the value of non-behavioral research, especially for certain subject areas. However, the shortcomings of this approach have also been clearly established – when we talk about ourselves, we often don’t know what we’re talking about.

Baumeister, et al. don’t sound very optimistic that this experimental trend can be reversed. (They call for an “affirmative action for action,” with journals and funding agencies giving “a little extra preference” to papers and proposals that measure behavior.) In the meantime, perhaps we should all just remember the intrinsic limitations of studies that rely exclusively on self-reports. It’s a limitation to keep in mind when reading the papers themselves and when reading blog posts about such papers. 

West, Stephen G., and T. Jan Brown. "Physical attractiveness, the severity of the emergency and helping: A field experiment and interpersonal simulation." Journal of Experimental Social Psychology 11.6 (1975): 531-538.

Baumeister, Roy F., Kathleen D. Vohs, and David C. Funder. "Psychology as the science of self-reports and finger movements: Whatever happened to actual behavior?." Perspectives on Psychological Science 2.4 (2007): 396-403.

Materialism and Its Discontents

“To do or to have?” That Hamlet-like question is the title of a scientific paper by Leaf Van Boven and Thomas Gilovich, published several years ago in The Journal of Personality and Social Psychology. It’s a simple paper, just a few pages long, but I doubt there's another piece of social science that I think about more during a typical day. In essence, the scientists tried to solve the problem of scarce resources. If our goal is to maximize happiness, then how should we spend our money? Should we buy things? Or should we buy experiences?

At first glance, the answer seems obvious – buy things! Things last! We can return to things. Experiences, on the other hand, are inherently ephemeral; they can only be consumed once. Buying an experience is like setting money on fire. 

But this intuition is exactly backwards - the person who wants more toys has misunderstood the nature of happiness. Van Boven and Gilovich demonstrated this by conducting a number of straightforward experiments. In one survey, they asked people to describe a recent purchase that was made with “the intention of advancing your happiness and enjoyment in life.” It turned out that those who described the purchase of an experience, such as a music concert or trip to the beach, reported much higher feelings of happiness than those who purchased objects. They were more likely to consider the money well spent and less likely to wish they’d bought something else instead. Similar results emerged from follow-up surveys, as reminding subjects of a recent “experiential purchase” made them happier than reminding them of a recent material purchase. Most impressive, perhaps, is that this effect seems to increase over time. While objects depreciate - we habituate to their delights - experiences become even more valuable, as we return again and again to the pleasurable memory. (The scientists refer to this as the process of positive reinterpretation.) One lasts, the other doesn’t. But what lasts isn’t what we can hold in the hand. 

I bring this paper up because, in the last year or so, there have been a number of very interesting studies on materialism and its discontents. While Van Boven and Gilovich showed that purchasing experiences made us happier, these new studies help reveal why purchasing things does not. They expose the heart of darkness inside every mall.

  • Marsha Richins, in the Journal of Consumer Research, showed that “high materialism consumers” typically experience a post-purchase hangover. While they were extremely excited about the object before they bought it – they imagined all the ways it would make their lives better – that excitement quickly dissipated once they actually possessed the object. According to Richins, this disappointment is rooted in a false belief among the most materialistic shoppers that “purchase of the desired product will transform their lives in significant and meaningful ways…For these consumers, the state of anticipating and desiring a product may be inherently more pleasurable than product ownership itself.” 
  • A team of psychologists conducted three longitudinal studies looking at the relationship between materialism and well-being. The results were clear-cut: “Across all three studies, results supported the hypothesis that people’s well-being improves as they place relatively less importance on materialistic goals and values, whereas orienting toward materialistic goals relatively more is associated with decreases in well-being over time.” In their most interesting experiment, the psychologists exposed a sample of “highly materialistic US adolescents” to a financial education program called “Share Save Spend,” which encourages people to balance spending with sharing and saving. Those teens randomly assigned to the intervention showed a decrease in materialism and an increase in self-esteem. They bought less, and thought better of themselves.
  • In the journal Communication Research, a group of Dutch psychologists help reveal the roots of materialism. They place the blame, at least in part, on advertisements targeting children, noting that kids who saw the most ads were also the most materialistic. This new study builds on previous work by the lead researcher, Susan Opree, which suggested that the “material values portrayed in advertising teach children that material possessions are a way to cope with decreased life satisfaction.”
  • A new study led by psychologists at Baylor University found that people who scored high on measures of materialism were also less grateful for what they had. According to their statistical analysis, this lack of gratitude was largely responsible for the observed relationship between materialism and decreased life satisfaction.

Taken together, the psychological literature on materialism is a fairly persuasive critique of modern capitalism, which conditions us to seek happiness in all the wrong places. That said, I’m most intrigued by a 2013 study on materialism and loneliness by Rik Pieters at Tilburg University, if only because his study complicates, ever so slightly, the strong version of the anti-materialism argument. It shows that materialism is usually a terrible way to seek life-satisfaction, but that it’s not always terrible. Some materialists live delighted lives. 

First, a brief taxonomy. It’s generally recognized that there are three subtypes of materialism. The first is material measure, which is the tendency to see possessions as a status signal or sign of success. (You buy the Porsche because it shows you can afford it.) The second is material medicine, in which purchases are seen as a quick way to elevate levels of future happiness. (You buy the Porsche because you believe the car will make your future self content.) Lastly, there’s material mirth, a world-view in which material possessions are believed to be part of the good life. (You buy the Porsche because it’s a beautiful car.)

Pieters was interested in the causal relationship between materialism and loneliness, as numerous studies have quantified the severe negative consequences of the lonely life. (According to one recent study of older people led by John Cacioppo, feelings of extreme loneliness increase the risk of premature death by 14 percent, which is roughly twice the impact of obesity.) Although it’s often speculated that materialism causes loneliness – our obsession with things leads us to neglect our relationships – Pieters wondered if the “influence might also run in the opposite direction.” Perhaps we aren’t lonely because we’re always shopping. Perhaps we shop because we’re always lonely.

To untangle this causal knot, Pieters collected data from 2,500 consumers between 2005 and 2010. He gave them standard surveys to measure materialism and its subtypes, asking people to rate, on a scale from 1 to 5, the extent to which they agreed with a series of statements about shopping and happiness. (“I like to own things that impress people,” “I like a lot of luxury in my life,” “Buying things gives me lots of pleasure,” etc.). They were also assessed in terms of loneliness, and asked whether or not they agreed with sentences about their social life. (“I feel in tune with the people around me,” “There is no one I can turn to,” “I feel left out,” etc.) By studying the ebb and flow of materialism and loneliness over time, Pieters was able to detect some interesting statistical relationships.
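For readers curious how that kind of directional question gets probed, here is a minimal cross-lagged sketch in Python: each trait at a given wave is regressed on both traits at the previous wave, and the two "cross" coefficients are then compared. The data file and column names below are hypothetical, and Pieters's actual model, which separates the three materialism subtypes across the full 2005-2010 panel, is considerably more elaborate.

    # A minimal cross-lagged sketch (not Pieters's actual code or data).
    # Assumes a long-format panel with one row per person per wave and
    # hypothetical columns: id, wave, materialism, loneliness.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("panel.csv")  # hypothetical file
    df = df.sort_values(["id", "wave"])
    for col in ["materialism", "loneliness"]:
        df[col + "_prev"] = df.groupby("id")[col].shift(1)  # each person's score at the prior wave
    df = df.dropna()

    # Does earlier loneliness predict later materialism, controlling for earlier materialism?
    m1 = smf.ols("materialism ~ materialism_prev + loneliness_prev", data=df).fit()
    # And does earlier materialism predict later loneliness?
    m2 = smf.ols("loneliness ~ loneliness_prev + materialism_prev", data=df).fit()

    # If the first cross-path is larger, loneliness is doing more of the causal work.
    print(m1.params["loneliness_prev"], m2.params["materialism_prev"])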

His most important finding was that materialism and loneliness often exist in a so-called vicious cycle, so that materialistic tendencies make us feel lonely, which leads us to seek comfort in purchases and possessions, which only makes us feel even lonelier. It’s a downward spiral that ends with lots of misery and credit card debt. Interestingly, loneliness seemed to have a bigger causal effect on materialism than materialism did on loneliness. This suggests that the best way to escape the “materialistic treadmill” is to make some new friends. 

But there’s an interesting exception to the rule. While two subtypes of materialism were locked in a vicious loop with loneliness – the worst was material medicine, followed by material measure - there was one subtype of materialism that was actually associated with reduced feelings of loneliness. Those who score high in material mirth, Pieters writes, are those who “derive pleasure from the process of buying things,” enjoy spending money on “things that are not practical,” and like “a lot of luxury in life.”

Why is this mindset so much more effective? Nobody really knows. Pieters speculates that part of the answer has to do with intrinsic motivation, as those high in mirth tend to buy things for the simple reason that buying things is fun. Their materialism is not about impressing others, or improving the mood of a future self – it’s about the sheer delight of spending money. Such an attitude, Pieters writes, might spill over and “indirectly improve social relationships,” as mirthful people also tend to lavish cash on family vacations, nice meals and other shared experiences.

Perhaps. Or maybe those merry materialists just like what they bought. Here's the great Frederick Seidel, the poet laureate of material mirth, writing about his new Ducati motorcycle in a poem called "Fog":

I spend most of my time not dying.
That’s what living is for.
I climb on a motorcycle.
I climb on a cloud and rain.
I climb on a woman I love.
I repeat my themes.

Here I am in Bologna again.
Here I go again.
Here I go again, getting happier and happier.

The motorcycle, says Seidel, is not merely a thing. It's an experience. If we live our life right, what's the difference?

Van Boven, Leaf, and Thomas Gilovich. "To do or to have? That is the question." Journal of Personality and Social Psychology 85.6 (2003): 1193.

Pieters, Rik. "Bidirectional Dynamics of Materialism and Loneliness: Not Just a Vicious Cycle." Journal of Consumer Research 40.4 (2013): 615-631.

 

Why Do We Watch Sports?

Why do we watch sports? It's a simple question with a complicated answer. Sports are a huge entertainment business – the NFL alone generates at least $7 billion a year in television revenue  – so it’s easy to lose sight of their essential absurdity. In essence, we are watching freakishly large humans in tight polyester outfits play with balls. They try to get these balls into cups, goals, baskets and end zones. It's a bizarre thing to get emotional about. 

There's no shortage of social science that tries to pin down the appeal of sports. There's the tribal theory, the mirror-neuron account, and the patterning hypothesis, which argues that sports take advantage of our tendency to hallucinate patterns in the noise. (Slot machines are fun for the same reason.) All of these speculations are probably a little bit true.

But I'm most intrigued by the so-called talent-luck theory, which was first proposed by the UCSD psychologist Nicholas Christenfeld in 1996. (His short paper has only been cited a single time, but I think it’s a brilliant little conjecture.) Here's the model in short form: humans like watching feats of physical talent, but we still want to be surprised. As a result, the most successful sports (i.e., those on SportsCenter) have found a way to engineer an ideal balance of skill and randomness. Thanks to randomness, the underdog (a polite way of saying the less talented team) still has a chance.

So what’s Christenfeld’s evidence? He relied on a popular statistical measure known as the split-half reliability coefficient. The measure is often used when assessing the reliability – that is, the internal consistency – of a psychological test. Let’s say, for instance, that you’ve developed a new cognitive assessment designed for NFL quarterbacks. To measure the internal consistency of the test, you randomly divide the questions into two halves. The split-half reliability is the correlation between the scores on the two halves, with higher correlations signaling higher test reliability. (The best tests are said to “hang together.”) In other words, if the quarterbacks who do well on one half of the test also tend to do well on the other half, then the test is probably measuring something, even if we still don’t know what that something is.

Christenfeld realized that this common statistical tool could be used to assess the reliability of various professional sports, including baseball, hockey, soccer, basketball, football and rugby. He randomly divided each of their seasons in half and then computed their split-half reliability. To what extent did a team’s success in half of its games predict its success in the other half? 
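To make the procedure concrete, here is a minimal sketch of that calculation in Python. The data format, a plain list of (winner, loser) pairs for one season, is my own assumption for illustration; Christenfeld's paper does not publish code.

    # A minimal sketch of a season's split-half reliability (illustrative, not Christenfeld's code).
    # `games` is assumed to be a list of (winner, loser) pairs for one season.
    import random
    from collections import defaultdict
    import numpy as np

    def split_half_reliability(games, seed=0):
        rng = random.Random(seed)
        halves = [defaultdict(lambda: [0, 0]), defaultdict(lambda: [0, 0])]  # team -> [wins, games]
        for winner, loser in games:
            half = halves[rng.randint(0, 1)]   # assign each game to a random half of the season
            half[winner][0] += 1
            half[winner][1] += 1
            half[loser][1] += 1
        teams = set(halves[0]) & set(halves[1])
        first = [halves[0][t][0] / halves[0][t][1] for t in teams]
        second = [halves[1][t][0] / halves[1][t][1] for t in teams]
        return np.corrcoef(first, second)[0, 1]  # correlate winning percentages across the halves

The higher that correlation, the more a team's record in one random half of the season predicts its record in the other half, which is exactly what it means for a sport to reliably measure talent.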

The first thing Christenfeld discovered is that different sports generate very different reliabilities on a per-game basis. Baseball, for instance, has a single-game reliability of 0.008. If that seems low, it’s because it is – the NBA is roughly eleven times more reliable on a per-game basis than MLB. (Hockey is smack in the middle, while the NFL has the highest single-game reliability rating of any major American sports league. Only rugby is more predictable.) When I tell Christenfeld that I’m impressed by the unpredictability of baseball, he notes that the randomness is rooted in the basic mechanics of the sport, as the difference between a triple down the line and a double play is often just a few millimeters on the bat. “There is also no partial credit in baseball,” he says. “A hitter doesn’t get partial credit for hitting the warning track.” The end result is that success in America’s game is an all-or-nothing proposition, which increases the noisiness of victory. (As Christenfeld notes, sports that are more reliable, such as football, do give partial credit for performance: “Football has field position,” he says. “Even if you don’t score, assembling a long drive still has benefits.”)

But this doesn’t mean baseball is all luck and noise. Instead, Christenfeld points out that the randomness of a single baseball game is balanced out by the fact that the baseball regular season is 162 games long, or ten times longer than the football season. What’s more, Christenfeld found the same pattern in every sport he looked at, so that season length was always inversely related to per-game reliability. “The sports whose single games reliably assess talent have short seasons, while those whose games are largely chance have long ones,” Christenfeld wrote in his Nature paper. “Thus these sports, differing enormously in their particulars, converge towards the same reliability in a season.” Christenfeld then goes on to argue that season length is not an “arbitrary product of historical, meteorological or other such constraints.” Rather, it is rooted in the desire of fans to witness a “proper mix of skill and chance.”
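The convergence Christenfeld describes can be illustrated with a standard psychometric identity, the Spearman-Brown formula, which projects how reliability grows as more "items" (here, games) are added. A back-of-the-envelope sketch, using the paper's per-game figure for baseball and an assumed placeholder value for football (his actual season numbers were computed empirically from the standings):

    # Spearman-Brown step-up: reliability of an n-game season from per-game reliability r.
    def season_reliability(r_game, n_games):
        return n_games * r_game / (1 + (n_games - 1) * r_game)

    # Baseball: very noisy games, very long season (per-game r = 0.008 from the paper, 162 games).
    print(round(season_reliability(0.008, 162), 2))   # ~0.57

    # Football: much more reliable games (0.09 is an assumed placeholder), far shorter season.
    print(round(season_reliability(0.09, 16), 2))     # ~0.61

Despite wildly different per-game numbers, the two seasons land in roughly the same neighborhood, which is Christenfeld's point.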

I find this paper fascinating for a few reasons. For starters, it clarifies the appeal of sports. Although sabermetricians have gotten far better at measuring various kinds of athletic talent, from DVOA to PER, the entertainment value of sports is inseparable from the fact that the talent of players is intentionally constrained by the rules of the game. “If sports were pure contests of skill, then they’d quickly become genetic tournaments,” Christenfeld says. “But that’s not much fun to watch.” As a result, the most successful sports have evolved rules to encourage what Christenfeld calls an “optimal level of discrepancy.”

This model also comes with practical consequences, helping us evaluate potential rule changes to a given game. More instant replay? That will increase reliability, which might be good for baseball, but bad for rugby. What about changing the requirements of women’s tennis, so that players have to win the same number of sets as men? “The data suggest that women’s tennis is more reliable” – the best players are more likely to win – “so I’d guess that adding another set would make it too reliable,” Christenfeld says. Should we shorten the baseball season, as many fans and commentators have proposed? Since baseball already has the lowest season-length reliability of any major sports league, that’s probably not a good idea. “You never want the outcome to feel arbitrary,” Christenfeld says. 

The NBA is probably the sport most in need of Christenfeld’s advice. According to his data, the season reliability of basketball is 0.890, which is far higher than the NFL’s season reliability of 0.681. Such reliability manifests itself as a competitive imbalance, as the best teams routinely dominate their lesser opponents. While the imbalance of the NBA is caused, at least in part, by "the short supply of tall people" - that, at least, is the conclusion of a 2005 paper led by the economist David Berri - these human factors are exacerbated by the league rules.  “I think it’s pretty clear that the second half of the [NBA] season should be shorter,” Christenfeld says. “The history of basketball is the history of basketball dynasties. There are way too many games where the outcome is predictable.”

And then there is the larger lesson of Christenfeld’s research, which concerns the difficulty of managing the competing claims of talent and equality. If talent is fairly rewarded – i.e., LeBron James gets paid what he deserves – then inequality increases and NBA underdogs are even less likely to win. To deal with this problem, most sports leagues impose salary caps on their teams, as they attempt to shrink the gap between the best and the worst, the richest and the poorest. Such parity makes the sport less predictable and more exciting; LeBron is underpaid for the good of the game.

In real life, of course, we’re not concerned about upsets and underdogs – we care about social mobility. We don’t seriously consider salary caps – we talk about marginal tax rates. Nevertheless, the basic tensions remain the same. While we want our society to be relatively reliable – every “game” should be a measurement of skill – we also don’t want a perfect meritocracy, for that creates a level of inequality that feels unfair. It’s also de-motivating, and can create a feedback loop in which the “underdogs” are even less likely to compete in the first place. If talent always wins, there’s no reason to play.

Christenfeld, Nicholas. "What makes a good sport." Nature 383.6602 (1996): 662-662.

 

Thank You For Reading

Welcome to my blog. Thank you for reading. I hope this will be a place where I can write about scientific research that interests me.

I also hope this blog can be a small step towards regaining the trust of my readers.

A quick note: when possible, all material will be sent to the relevant researchers for their approval. If that’s not possible, an independent fact-checker will review it.

Please contact me with any corrections or suggestions: jonah.lehrer@gmail.com