The Psychological Benefits of Horror Movies

I’ve never understood the appeal of horror movies. The world is terrible enough — why pay money to endure more terror? These days, I’m all about counter-programming reality, soothing my amygdala with various forms of the marriage plot, preferably involving Emma Thompson. 

And yet, there’s new evidence that my escapist strategy is the wrong approach. Research by Coltan Scrivner, John Johnson, Jens Kjeldgaard-Christiansen and Mathias Clasen suggests that the best way to deal with our pandemic anxiety is to lean into it, seeking out bleak entertainments that match the headlines. Horror movies, according to this theory, are a kind of emotional practice. The pretend terrors help us cope with real ones.

The study was conducted in April, during the uncertain early days of the pandemic. Subjects were asked to rate their interest in a long list of scary genres, from zombie flicks to post-apocalyptic thrillers, pandemic films to alien invasion movies. They were also asked questions about their psychological resilience (“I believe in my ability to get through these difficult times”) and preparedness for the pandemic (“I was mentally prepared for a pandemic like the Covid-19 pandemic”).

Sure enough, different kinds of scary movies came with different psychological benefits. Fans of straight horror were less distressed by the pandemic. (They made it through The Shining; they can make it through Covid.) On the other hand, fans of so-called prepper movies—this includes zombie, apocalyptic and alien invasion narratives—felt more prepared for the pandemic, and reported fewer negative disruptions. (Zombies taught them what supplies to buy at Costco.) More generally, people who exhibited high levels of “morbid curiosity” showed higher levels of positive resilience during the pandemic. Being interested in dark stories taught them how to deal with dark times.

It’s a nice correlational study, and builds on previous research showing the potential benefits of fictional narratives. In particular, scientists have repeatedly shown that stories with complicated characters can enhance our theory of mind skills, helping us get better at interpreting the thoughts and feelings of other people. Cecilia Heyes, a psychologist at the University of Oxford, compares mindreading to print reading, noting that both are part of our cultural inheritance, and not hard-wired into the infant cortex. Print reading takes years of “scaffolding and explicit instruction”—we have to be taught the alphabet and phonics. Mindreading almost certainly requires the same sort of training. As Heyes notes, human babies are no better at mentalizing than chimp babies. The difference is that we grow up in environments rich with stories that take us inside the minds of imaginary people. Fiction, in this sense, is an essential human technology, augmenting our natural mental abilities.

The same logic seems to apply to horror movies. As Scrivner et al. write:

“One reason that horror use may correlate with less psychological distress is that horror fiction allows its audience to practice grappling with negative emotions in a safe setting. Through fearing the murderer or monster on the screen, audiences have an opportunity to practice emotion regulation skills. Experiencing negative emotions in a safe setting, such as during a horror film, might help individuals hone strategies for dealing with fear and more calmly deal with fear-eliciting situations in real life.”

Here’s my question: does reading or watching terrifying non-fiction—say, stories about the Holocaust, the Gulag and the Black Plague—also confer resilience? Or is there something special about the artifice of art, the way it titillates our emotions with scary music, gory close-ups and buckets of red corn syrup? A terrifying subject might not be enough. We might need to feel the terror for ourselves, which is something horror movies are particularly good at.

Life is unpredictable. Who knew that 2020 would be such a disaster show? While we typically treat stories as a pleasurable distraction—Netflix is a vacation from the universe—this research suggests that narratives actually prepare us for adversity, allowing us to simulate all sorts of calamities without ever leaving the comfort of the couch. 

Scrivner, Coltan, et al. "Pandemic practice: Horror fans and morbidly curious individuals are more psychologically resilient during the COVID-19 pandemic." Personality and Individual Differences (2020): 110397.

How To Fix the Smartphone

The astragalus is the heel bone of a running animal. It’s an elegant part of the skeleton, so curved it looks carved, with four distinct sides. It fits in the palm of your hand.

The astragalus is also one of the most common archaeological artifacts, found in ancient dig sites all over the world. The bones have been uncovered in Greek temples and Mongolian villages, Egyptian tombs and Native American cave dwellings. In Breughel’s masterpiece “Children’s Games,” two women toss astragali in the corner of the painting. They look like they’re having fun.

The women tossing astragali in Breughel’s “Children’s Games”

Why are these small animal bones such a universal relic? The answer returns us to the peculiar shape of the astragalus. Because it has four sides, the bone can be used like dice: when thrown on a flat surface, it turns into a primitive randomizer, injecting a dose of uncertainty into the game. As the science historian Ian Hacking writes, these dice made of skeletons are so ubiquitous that “it is hard to find a place where people use no randomizers.” 

Of course, we don’t throw bones anymore. Now we have more advanced sources of randomness. Just look at slot machines, those money-sucking devices that enchant people with their unpredictable rewards. Although we know the games are stacked against us, we can’t resist the allure of their intermittent reinforcements. 

Or consider the smartphone. If the reward of slots is the rare jackpot, the reward of these devices is the arrival of a notification. As noted in a new paper by Nicholas Fitz and colleagues in Computers in Human Behavior, “In less than a decade, receiving a notification has become one of the most commonly occurring human experiences. They arrive bearing new information from or about a person, place, or thing: a text from your mom, news about Donald Trump, or a calendar invite for a meeting.” The ancients tossed animal bones to experience the thrill of random rewards. All we have to do is glance at these gadgets in our pockets.

There’s nothing inherently wrong with notifications. Unfortunately, their intermittent delivery (and the way they are constantly evolving to become more salient and sticky) creates a digital system that sucks up our attention, which is why the typical American spends 3 to 5 hours a day staring into small shiny screens. The end result is a permanent state of distraction, a mental life defined by its addictive interruptions.

Is there a better way? This urgent question is the subject of that new paper by Fitz et al. The scientists explore the potential benefits of creating smartphone notifications that are batched and predictable, arriving at regular intervals throughout the day. If our current smartphone experience is like a pocket slot machine, every random beep another reward, these batched notifications try to remove the twitchy uncertainty. We know exactly when the rewards will arrive, which will hopefully make them far less exciting.

To test the effectiveness of this setup, Fitz et al. recruited 237 smartphone users in India. Each of the users was randomly assigned to one of four conditions: (1) notifications received as usual; (2) notifications batched every hour; (3) notifications batched three times a day; or (4) no notifications at all. The conditions were implemented using a custom-built Android app.
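(The conditions were built into a custom Android app that isn’t public. For intuition, here is a minimal Python sketch of the logic behind the three-batch condition: hold incoming notifications silently and release them only at fixed times. The class names and batch hours are illustrative assumptions, not the authors’ code.)

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List

# Illustrative batch times: morning, midday, evening (assumed, not from the paper).
BATCH_HOURS = [9, 13, 18]

@dataclass
class Notification:
    app: str
    message: str
    received_at: datetime

@dataclass
class BatchedInbox:
    """Queue notifications silently; release them only at scheduled hours."""
    pending: List[Notification] = field(default_factory=list)

    def receive(self, note: Notification) -> None:
        # No sound, no banner: the notification just waits in the queue.
        self.pending.append(note)

    def batch_is_due(self, last_delivery: datetime, now: datetime) -> bool:
        # A batch is due if a scheduled hour falls between the last
        # delivery and the current check.
        for hour in BATCH_HOURS:
            scheduled = now.replace(hour=hour, minute=0, second=0, microsecond=0)
            if last_delivery < scheduled <= now:
                return True
        return False

    def deliver(self) -> List[Notification]:
        # Release everything at once, then start a fresh queue.
        batch, self.pending = self.pending, []
        return batch
```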

Which setup worked best? It wasn’t close—batching notifications into three predictable intervals led to improvements across a wide range of psychological outcomes. (Hourly batching was less effective, though it did lead people to feel less interrupted by their phones.) According to the data, those who got three batches reported less inattention, more productivity, fewer negative feelings, reduced stress and increased control over their phone. They also unlocked their phones about 40 percent less often.

Interestingly, silencing all notifications tended to backfire, boosting anxiety without any parallel benefits in focus. (People were still distracted, just by their FOMO, not their gadgets.)

This research comes with enormous practical implications. In a little over a decade, the smartphone has transformed the nature of human attention, consuming gobs of our mental bandwidth. It’s a consumption we often underestimate. According to Fitz et al., most people think they get about thirty notifications per day. The reality is far worse, with the typical subject receiving more than sixty beeps, pings and buzzes. But if you ask them how many notifications are ideal, they give an answer closer to fifteen. In other words, we desire technology with limits, a smartphone that shields us from its own appeal.

And this brings us back to the power of intermittent reinforcement. Randomness has always been entertaining. The difference now is that we’ve engineered a technology that’s simply too irresistible—software evolves far faster than our hardware—which is why we end up spending more time staring at our phones than we do parenting, exercising or eating combined.

But it doesn’t have to be this way. One day, a gadget maker will give people what they really want: a machine that doesn’t hijack the brain. Based on this paper, a core element of this future gadget will be a default notification system that delivers its interruptions in predictable batches. That text can wait; so can the update from the Times and Twitter; we don’t need to know who liked our Instagram in real time.  

Sometimes, less is so much more.

Fitz, N., Kushlev, K., Jagannathan, R., Lewis, T., Paliwal, D., & Ariely, D. (2019). Batching smartphone notifications can improve well-being. Computers in Human Behavior.

The Scars of Separation

The question of what happens when you separate young children from their parents is not a political one. It’s a scientific one.

And the answers are tragic.

In 1937, John Bowlby began working at the Child Guidance Clinic, a mental hospital for youth in North London. One of the children under Bowlby’s care was a six-year-old named Derek, who had been sent to the Clinic for “persistent thieving, truancy and staying out late.” At first glance, Derek’s childhood appeared normal and happy; his parents were loving and well-adjusted. However, Derek’s medical file contained one notable event: when he was eighteen months old, Derek was hospitalized for nine months with diphtheria. He was completely isolated from his family. According to Derek’s mother, this separation changed her son. When he returned home, he called her “nurse” and refused to eat. “It seemed like [I was] looking after someone else’s baby,” she said.

Derek’s story led Bowlby to review the histories of his other thieving patients. What he discovered next would define the rest of his career. According to the case files, approximately 85 percent of “affectionless” children prone to stealing had also suffered, like Derek, from a prolonged separation in early childhood. This became their defining trauma. These kids stole candy and toys and clothes, Bowlby argued, to fill an emotional void. “Behind the mask of [affectionless] indifference,” he wrote, “is bottomless misery.” 

Bowlby was haunted by this correlation between separation from loved ones and emotional damage. During World War II, Bowlby followed the reports from wartime orphanages. He spoke often with Anna Freud, Sigmund’s youngest daughter and the head of the Hampstead War Nursery, who described the “severe deterioration” of the kids in her care. In many instances, the toddlers were simply not able to cope with the sudden absence of their family. Patrick, for instance, was a three-year-old boy whose mother had to work in a distant munitions factory. The boy was distraught, but he refused to cry because his parents said they wouldn’t visit if he cried. And so Patrick constructed an elaborate routine instead, telling himself over and over again that “his mother would soon come for him, she would put on his overcoat and would take him home with her again.” As the days turned into months, Patrick’s monologue became increasingly detailed and desperate: “She will put on my overcoat and leggings, she will zip up the zipper, she would put on my pixie hat.” When the nursemaids asked Patrick to stop talking, he began mouthing the words silently to himself in the corner.

These stories made Bowlby determined to conduct his own study on the “effect on personality development” of an extended separation between children and parents. His subjects were patients in the pediatric wards of hospitals. At the time, British doctors enforced a strict visitation policy, as frequent family contact was believed to cause infection and emotional neediness. Most hospitals limited parental visits to a single hour on Sundays, with no visits allowed for those under the age of three.

It didn’t take long before Bowlby realized that these separations were traumatic. What’s more, the trauma followed a predictable arc, much like the progression of a physical disease. (Bowlby would later compare the damage of separation to a vitamin deficiency, in which the lack of an “essential nutrient” causes permanent harm.) When first left alone at the hospital, the children collapsed in tears and wails; they didn’t trust these strangers in white coats. Their violent protest, however, would soon turn into an eerie detachment, especially if the separation lasted for more than a week. Instead of crying, the children appeared withdrawn, resigned, aloof. It was as if they had forgotten about their parents entirely. The hospital staff referred to this phase as “the settling down.” Bowlby called it despair.

He was right. One of the recurring themes of attachment theory—the psychological theory Bowlby pioneered with Mary Ainsworth and others—is the enduring damage of early separation from one’s parents. (A Book About Love covers attachment theory in detail.) Consider the results of a natural experiment that took place during World War II, when more than 70,000 young Finnish children were evacuated to temporary foster homes in Sweden and Denmark. For the kids who stayed behind in Finland, life was filled with moments of acute stress—there were regular air bombardments and invasions by the Russians and the Germans. But for those sent away, the stress of being separated from their parents was unceasing. They lacked what they needed most.

This early shock had lifelong consequences. A 2009 study found that Finnish adults who had been sent away from their parents between 1939 and 1944 were nearly twice as likely to die from cardiovascular illness as those who had stayed at home. Although more than 60 years had passed since the war, these temporary orphans were also significantly more likely to have high blood pressure, type 2 diabetes, elevated levels of stress hormone and severe depressive symptoms.

The pragmatist philosopher Richard Rorty argued that the ultimate goal of liberalism was the elimination of cruelty. For Rorty, cruelty wasn’t just the infliction of suffering—it meant making others suffer while ignoring their plight. It meant choosing not to care, usually because we focus on our “traditional differences (of tribe, religion, race, customs, and the like)” rather than our “similarities with respect to pain and humiliation.”

The science of attachment theory is a powerful reminder of those similarities. It doesn’t matter where a child comes from: we know what will happen when we separate them from their parents. We are causing pain that lasts, inflicting wounds that might never heal. 

 It is the very definition of cruelty.

Bowlby, John. "Maternal care and mental health: A report prepared on behalf of the World Health Organization as a contribution to the United Nations programme for the welfare of homeless children." (1952).

What Tennis Can Teach Us About Technology

In the winter of 1947, Howard Head, an aerospace engineer, was skiing down Stowe Mountain when he decided that wooden skis were a terrible idea. He kept tripping on the long hickory blades; the material was too heavy for such a nimble sport. On the train back to Baltimore, Head began sketching out a new ski made out of airplane parts, focusing on the aluminum alloys and plastic laminates used to construct the fuselage. It took a few more years of stubborn tinkering—Head’s first 45 prototypes came apart on the slopes—but by the 1960s his aerospace skis were dominating at the Olympics. 

In 1972, Head sold his skiing company and settled into retirement. To stay in shape, Head started taking tennis lessons. However, he soon realized that he wasn’t very good at the game; his shots careened all over the court. Head could have practiced more, but that didn’t seem very fun, so he decided to fix the tennis racket instead. At the time, virtually all rackets were made of wood, with an elliptical surface area of roughly 70 square inches. While companies had experimented with slightly larger rackets—more surface area meant a bigger sweet spot—they were ultimately constrained by the weight of wood.

Head’s insight, which he laid out in a 1974 patent application, was that new materials could eliminate these tradeoffs. Head settled on a composite blend of carbon fiber and resin, which allowed him to create a racket with 40 percent more surface area and a far larger sweet spot. The end result was a piece of equipment that made the game much easier for amateurs. 

In 1978, Head got one of his new oversized rackets into the hands of a talented 16-year-old named Pam Shriver. Although Shriver entered the U.S. Open unseeded, she ended up beating Martina Navratilova in the semifinals. Pros took notice: by 1984, composite rackets had taken over the tour. (John McEnroe was the last player to win a major tournament with a wooden racket, beating Bjorn Borg at the 1981 U.S. Open.) You can see the triumph of composites in this chart:

[Chart: the rise of composite rackets on the professional tour]

This is a portrait of technological disruption. As such, the game of tennis in the post-composite age provides an ideal case study with which to investigate the impact of innovation. That, at least, is the premise of a new working paper by the economists Ian Fillmore and Jonathan Hall. By looking at every men’s professional tennis match played between 1968 and 2014, Fillmore and Hall were able to map out the unlikely impact of Howard Head’s invention.

The first thing they found is that composite rackets dramatically shifted the demographics of the pro tour. In the mid-1970s, when wooden rackets still dominated, nearly 20 percent of all matches featured a player above the age of 30. By 1990, that number had shrunk to roughly 5 percent of matches.

The youngest players made up the difference. Between 1975 and 1984, the percentage of matches involving players under the age of 21 nearly tripled, to 30 percent. 

Why were composite rackets so hard on the oldest players? Head, after all, invented the composite racket to help old guys like himself hit good shots; it was supposed to level the playing field. And yet, his invention ended up doing the exact opposite, tilting the competitive balance in favor of youth. 

To understand how this happened, it’s important to look at how composite rackets were actually used on the pro tour.  While Howard Head was trying to create a bigger sweet spot for amateurs, professionals didn’t really need a bigger sweet spot. Instead, they used these new rackets to give their shots more topspin. As noted by the physicist Rod Cross, the best players using wooden rackets were only able to impart a topspin of about 200 rpm on a typical forehand. However, the increased width of the composite racket meant that players could attack the ball at a more extreme angle, and thus generate 1000 rpm of topspin. That’s a five-fold increase in ball rotation, which allowed them to hit with more velocity, and to create higher bounces that are harder to return. (Just imagine the consequences for baseball if pitchers invented a breaking ball with five times more spin. The home run would soon become a relic.)

What does topspin have to do with older players? To take full advantage of Head’s innovation, experienced players had to quickly learn a new set of tactics and techniques. They had to change their grips, stances, and swings to maximize spin. Interestingly, the increased experience of veterans seemed to interfere with this adjustment, which is why their exit rate from the tour doubled between 1970 and 1984. Head invented the composite racket to make it easier for older players like himself to hit good shots. He ended up ridding the pro tour of nearly everyone over 30, at least for a generation of players.

There’s a larger lesson here: when it comes to technological disruption, we inevitably fail to anticipate the real-world consequences. The public sphere is lousy with pundits and prophets, boldly forecasting the future impact of the latest technology. Winners, losers, etc. But what history teaches us is that nobody knows anything.

In their paper on tennis, Fillmore and Hall briefly reference the impact of photography on modern painting as another example of how technology disrupts in unpredictable ways. After J.M.W. Turner saw a daguerreotype for the first time, he declared: “This is the end of art. I am glad I have had my day.” Turner saw the photograph as a dire competitive threat, since it meant everyone could engage in the business of representation; verisimilitude had become a cheap technology. But Turner was wrong—the photograph didn’t kill art, it just forced painters to seek out new cultural markets. Instead of aiming for realism, they began experimenting with abstraction, focusing less on the depiction of objective reality and more on its subjective qualities. The camera opened up space for Cezanne.

Or consider basketball, a game that has been transformed by the “innovation” of the 3-point shot. Although it might seem obvious that three-pointers would increase the value of guards—they’re the ones most likely to take the shot—Gannaway et al. showed that the players who have gained the most value in the NBA labor market are tall centers who play close to the basket. 

What explains this counterintuitive result? Defensive strategy. When players move out to contest three-pointers, they leave more space for the interior game. Furthermore, the long ball can be taught, as evidenced by the NBA’s steadily rising three-point field goal percentage. (In 1983, teams made 23.8 percent of their three-pointers; in the 2017-2018 season, they made 36.2 percent.) What can’t be taught is height.
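(A quick back-of-the-envelope calculation shows why that learning curve matters. Using the league-wide percentages above, and assuming a round-number 50 percent success rate on two-point attempts as a baseline of my own, the three-pointer went from a losing bet to the most efficient shot on the floor.)

```python
# Expected points per attempt, using the three-point percentages quoted above.
# The 50% two-point baseline is an assumed round number for comparison.
three_pt_pct = {"1983": 0.238, "2017-18": 0.362}
two_pt_pct = 0.50

for season, pct in three_pt_pct.items():
    print(f"{season}: {3 * pct:.2f} expected points per three-point attempt")
print(f"Baseline: {2 * two_pt_pct:.2f} expected points per two-point attempt")

# 1983:     0.71
# 2017-18:  1.09
# Baseline: 1.00
```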

Or look at the effects of the great disruption of our time: computers. Last year, I wrote about the effects of digital technology on human capital. Interestingly, the widespread adoption of software seems to be increasing the value of social skills. At first glance, this makes little sense. Shouldn’t the computer age reward those minds best at computation? However, the researchers found that many cognitive skills are easily replaced by cheap machines. As a result, employers increasingly seek out humans who are adept at the very things software can’t do, such as manage other people and settle interpersonal issues.

We live in a time of relentless technological change. These case studies help explain why such change is so unsettling. Our inventions aren’t just altering the competitive landscape—they are doing so in completely unpredictable ways. Those older tennis players probably thought the oversize racket would help them compete with younger players, compensating for their slight decline in athleticism and speed. They were wrong.

And the cascade of unexpected consequences never stops; when it comes to the impact of technology, change is the only constant. The rise of abstract expressionism created a resurgent demand for realist portraiture; in the current NBA game, the most valuable players aren’t just big men—they’re big men who can also hit three pointers. (There’s a reason Boogie Cousins averaged more than six three-point shots per game last year.)  

Tennis is no exception. For nearly twenty-five years, the percentage of matches involving older players remained below the levels of the wooden racket age; composites had turned the sport into a young person’s game. But then, starting around 2000, the share of matches involving players older than 30 began to steadily increase. By 2014, these veterans played in a higher percentage of matches than ever before.

What's more, these older players are dominating the sport. As I write this, the top-ranked player is 31-year-old Rafael Nadal. He’s closely followed by 36-year-old Roger Federer, the greatest player of all time and recent Australian Open champion.*

Both men use the latest composite rackets. Their shots are loaded with topspin. 

Fillmore, Ian, and Jonathan D. Hall. "Technological Change and Obsolete Skills: Evidence from Men’s Professional Tennis." (2018).

*Serena Williams continues to dominate the women’s tour at the age of 36.

How Your Smartphone Camera Is Affecting Your Life

Like many people, I despise my smartphone. I mean, it’s an astonishing piece of technology—a slab of touch-sensitive glass connected to the universe—but I resent its chronic temptations, the way it sucks me in with emails, texts, videos, sports scores and other effluvia from the ether. Instead of paying attention to my actual experience—even when it’s the experience of waiting in line or eating lunch—I find myself staring into these lit pixels, thumbing the screen for more.

I realize my complaints are a cliché; it’s easy to hate on these splendid gadgets. Nevertheless, the fact that our daily experience is increasingly mediated by technology does seem to be at the root of many of our 21st century anxieties. Our social networks have been hijacked by Facebook; our public discourse is now dictated by algorithms chasing eyeballs; we can’t order a latte without taking a picture.

It’s that last mediation I’ve been thinking about lately. One reason is that our ceaseless picture-taking seems to be one of the more unexpected side-effects of the smartphone. When you re-watch Steve Jobs’ 2007 introduction of the iPhone, much of it feels familiar. Jobs is proud of a device that can make calls, text and play Pirates of the Caribbean. But what’s more surprising is the way the camera feels like an afterthought during the keynote: Jobs dismisses the tool in a few short sentences. (He honestly seems more interested in conference calling and Yahoo Mail support.) And yet, the built-in camera is now the star attraction of the latest smartphones. If your battery still holds juice, then the best reason to buy a new device is to take better pictures.

Because people take a lot of pictures. According to the Times, roughly 1.3 trillion photos were taken in 2017, the vast majority of them with our smartphones. 350 million of these pictures are uploaded to Facebook every day. Since Instagram launched in 2010, over 40 billion photos have been shared on the site.

That’s a long windup to the practical questions in this post. How are all these pictures affecting our experience? Is the dad at the park taking snapshots of his kids having less fun? When someone records a song at a concert is she missing the music? What is lost when everything can be captured?

A recent paper in JPSP by the psychologists Kristin Diehl, Gal Zauberman and Alixandra Barasch provides some useful answers. The scientists began by taking over a Philadelphia tour bus company for the day. Half of the tourists were assigned to the photo condition: they were given a digital camera and told to take at least ten pictures of their experience. The other half were told to “experience the tour as you normally would.” Both groups were asked to leave their belongings, including smartphones, with a research assistant.

After the bus tour was over, the tourists were given a short survey. Here’s the key takeaway: those people who took lots of pictures enjoyed themselves significantly more. The camera didn’t get in the way—it improved the experience.

The root cause of this improvement was investigated in a second field study done at the Reading Terminal Market, a public food hall in Philadelphia. One hundred and forty-nine diners participated in the study; half were assigned to the photo condition and asked to take at least three pictures of their “eating experience.” The other half were left alone with their meals.

Once again, those who took pictures showed higher levels of enjoyment—photography made their food taste better. Interestingly, these people also showed significantly higher levels of engagement. Because they were immersed in the act of picture-making, they were more attentive to the details of the scene; the mundane experience became an aesthetic event. “What we find is you actually look at the world slightly differently, because you’re looking for things you want to capture, that you may want to hang onto,” Diehl said in a recent interview. “That gets people more engaged in the experience, and they tend to enjoy it more.”

To confirm these findings, Diehl et al. conducted six additional experiments under more controlled conditions. In one study, they used eye-tracking equipment in an archaeology museum—those in the photo condition spent more time looking at the artifacts, which led them to enjoy the museum more. They also showed that, while picture-taking can improve pleasurable experiences, it can also make negative ones even worse. Because we’re more engaged, the unpleasantness tends to linger; it’s harder to forget what’s still in the cloud.

On the one hand, there’s something rather surprising about these findings. Picture taking, after all, is a form of multi-tasking: we’re dividing our attention between the experience and a technology. In nearly every other instance, such multi-tasking reduces engagement, which is why you shouldn't text while driving. But photography is the exception that proves the rule: for once, the gadget makes us more aware of the world beyond. The mediated experience is intensified.

There are still many good reasons to resent these computers in our pockets. The camera just isn’t one of them. We might joke about all those millennials taking selfies and snapshots, but it turns out they’re maximizing one of the best technologies of the digital age. We’re all searching for ways to be more mindful and present, less distracted by the noise all around. Who knew photography could help? 

Diehl, Kristin, Gal Zauberman, and Alixandra Barasch. "How taking photos increases enjoyment of experiences." Journal of Personality and Social Psychology 111.2 (2016): 119.            

How A Tired Mind Limits the Body

When it comes to our self-understanding, we have been held back by an extraordinary philosophical mistake. It’s a forgivable error, since it reflects our most basic intuitions. The mistake I’m talking about is dualism, which holds that the mind and body are fundamentally separate things.

To borrow the famous framework of Rene Descartes, the human mind is a “thinking thing,” composed of an immaterial substance. (Our thoughts are airy nothings, etc.) The body, in contrast, is a “thing that exists,” just a mortal machine that bleeds. For Descartes, dualism was a defining feature of humanity. Every animal has a body. Only we have a mind.

The dualist faith continues to shape our lives. Like Descartes, we tend to assume that mental events have mental causes—you are sad because your brain is sad—and that physical events have physical causes. (If your back is in pain, there’s something wrong with your back.) Dualism is why we treat depression with pills (rather than exercise, which is often just as effective) and undergo so many spinal surgeries (which are often ineffective).

Dualism seems obviously true. But it’s mostly false. In recent years, modern neuroscience has demolished these old Cartesian distinctions. It has done this mostly by showing how the body is not a mere power plant to the brain, but rather shapes every aspect of conscious experience. The bacteria in your intestines, for instance, seem to influence your mood, while that feeling of fear probably began as a slightly elevated heart rate. Our memory is improved when it’s connected to physical movement and the sweat glands in your palm can anticipate your gambling mistakes long before the cortex catches up. As the neuroscientist Antonio Damasio has written, “The body contributes more than life support. It contributes a content that is part and parcel of the workings of the normal mind.”

These studies are convincing. And yet, even if one acknowledges the subtle powers of the body— the soul is surprisingly carnal—there is still one realm in which dualism is taken for granted: athletic performance. When we look at our best athletes, we appreciate them as physical specimens, blessed with better flesh than the rest of us. They must have bigger hearts and more fast-twitch muscle fibers; highly efficient lungs and lower resting pulses. We ignore their “thinking thing” and focus instead on their body, “the thing that exists.”

But even here the body/mind distinction proves illusory. Consider a new paper by Daniel Longman, Jay Stock and Jonathan Wells. Their subjects were sixty-two male rowers from the University of Cambridge. They were all in excellent shape. On their first visit to the lab, the men rowed as intensely as possible for three minutes as the scientists tracked their total power output. On their second visit, the men were given an arduous mental task. Seventy-five words were briefly flashed on a screen; their job was to remember as many of them as possible.

The last visit to the lab combined these two measures. While the men worked up a sweat on the rowing machine, they were simultaneously shown a new set of words and asked to remember them. As expected, combining the tasks led to a dropoff in performance: the men remembered fewer words and generated less power on the rowing machine.

But here’s the interesting part: the decline was asymmetric, with physical performance suffering a dropoff that was roughly 25 percent greater than mental performance.
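(To make that asymmetry concrete, here is a toy calculation. The baseline numbers and the 10 percent drop in recall are invented; the point is simply what “a 25 percent greater dropoff” implies for power output.)

```python
# Invented baselines, for illustration only.
baseline_words = 30    # words recalled in the memory-only session (assumed)
baseline_watts = 350   # mean power in the rowing-only session (assumed)

mental_decline = 0.10                     # assumed dual-task drop in recall
physical_decline = mental_decline * 1.25  # "roughly 25 percent greater"

dual_words = baseline_words * (1 - mental_decline)
dual_watts = baseline_watts * (1 - physical_decline)

print(f"Recall: {baseline_words} -> {dual_words:.0f} words ({mental_decline:.0%} drop)")
print(f"Power:  {baseline_watts} -> {dual_watts:.0f} W ({physical_decline:.1%} drop)")
```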

What accounts for this asymmetry? The scientists suggest that it’s rooted in the scarcity of blood sugar and oxygen, as the brain and body compete for the same finite resources. And since we are creatures of cogito—thinking is our competitive advantage—it only makes sense that we’d privilege the cortex over our quadriceps.

The larger lesson is that our thoughts and body are not separate systems—they are deeply intertwined, engaged in a constant dialectic. Those rowers didn’t perform worse because their muscles were run down. Rather, they had less physical power because their selfish brain decided to feed itself first. This means that the best athletes don’t just have better bodies – they also have minds that don’t hold them back.      

Such research adds to the evidence for the so-called Central Governor theory of physical endurance. (I wrote about this recently in Men’s Health.) Most closely associated with Timothy Noakes, now an emeritus professor at the University of Cape Town, the Central Governor theory argues that the feeling of bodily fatigue is primarily caused by the brain, and not the body. As Noakes points out, in the final stages of a race, up to 65 percent of muscle fibers in the leg remain inactive. In addition, levels of ATP—the molecule used to transport energy within our cells—almost never fall below 60 percent of their resting value. This suggests that we still have plenty of energy left, even when the body feels exhausted. The Central Governor is just too scared to use it.

It’s a simple idea with radical implications. After all, we’ve assumed for nearly a century that our physical limits were largely reducible to the laws of muscular chemistry. (In the 1920s, the British physiologist and Nobel laureate Archibald Hill began writing about the effect of “oxygen debt” and the accumulation of lactic acid during intense exercise.) Noakes, however, argues that the reality is far more complicated, and that our sense of fatigue is a subjective mental construct, based on countless variables, from the temperature of the skin to the cheers of the crowd. “I am not saying that what takes place in the muscles is irrelevant,” Noakes writes in his autobiography, Challenging Beliefs. “What I am saying is that what takes place physiologically in the muscles is not what causes fatigue.”

And this brings us back to dualism. After all, unless you admit the enormous mental component of physical performance, you won’t be able to train effectively. You’ll be focused on VO2 max and lactate concentrations—highly imperfect measures at best—when you should be building up the threshold of your Central Governor.

So how does one train the Central Governor? In my Men’s Health piece, I profiled Holden Macrae, professor of Sports Medicine at Pepperdine. As part of the Red Bull High Performance research project, he gave endurance athletes a tedious mental chore for 30 minutes. Once their brain was sufficiently run down, Macrae then had them perform a difficult cycling workout. “We found that the power output of the mentally pre-fatigued athletes was way lower than the non-fatigued,” he told me. “It didn’t matter that their bodies were fresh. Their brains were tired, and that shaped their performance.”

Macrae argues that these findings have practical implications for training. If elite athletes are looking to push the boundaries of their endurance, then they should begin their physical training after a brain workout. “Because you are stressing the mind and the body at the same time, you are forcing yourself to write a new software program,” he says. “It’s the same logic as high-altitude training, only you don’t have to go anywhere. You just have to do something boring first.”

The appeal of dualism is inseparable from the fact that it feels true; the body and mind seem like such separate entities. But one of the profound potentials of modern neuroscience is the way it can falsify our longstanding assumptions about human nature. You are not your brain, and your body is not just a body; the soul and the flesh have a very porous relationship. Once we understand that, we can find ways to get more out of both.

Or at least get in a better workout.

Longman, Daniel, Jay T. Stock, and Jonathan CK Wells. "A trade-off between cognitive and physical performance, with relative preservation of brain function." Scientific Reports 7.1 (2017): 13709.

Does Anyone Ever Change Their Mind?

Democracy is expensive. During the 2016 general election, candidates spent nearly $7 billion on their campaigns. (More than $2.5 billion was spent just on the presidential contest.) This money paid for attack ads on television and direct marketing in the mail; it went to voter outreach in swing districts, fancy consultants and targeted ads on Facebook. The goal of all this spending was simple: to persuade more Americans to vote for them.

Did it work? Were those billions well spent? According to a new paper by Joshua Kalla of UC Berkeley and David Broockman at Stanford, the overwhelming majority of campaign activity failed to persuade voters. As they bluntly state, “We argue that the best estimate of the effects of campaign contact and advertising on Americans’ candidate choices in general elections is zero.” Not close to zero. Not even one or two percent. Zero.

Kalla and Broockman come to this shocking conclusion by conducting the first meta-analysis of campaign outreach and advertising. Based on a review of forty field experiments, they found that the average effect of all these professional interventions was negligible. (Or, to be exact, -0.02 percentage points.) While two of the forty studies did find a significant shift in voter behavior, Kalla and Broockman rightly note that these studies looked at interventions with limited applicability. (In one case, the candidate himself knocked on doors, while the other intervention relied on an onerous survey that most voters would never answer.)
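(For readers curious about the mechanics: a meta-analytic “average effect” of this kind is essentially a precision-weighted mean, in which studies with smaller standard errors count for more. Here is a minimal sketch with invented numbers, not the forty actual estimates or the authors’ exact estimator.)

```python
# Fixed-effect meta-analysis: weight each study's estimated effect
# (in percentage points) by the inverse of its variance.
# The effects and standard errors below are invented for illustration.
effects = [0.5, -0.8, 0.1, -0.3, 0.2]
std_errors = [0.6, 0.9, 0.4, 0.5, 0.7]

weights = [1 / se ** 2 for se in std_errors]
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5

print(f"Pooled effect: {pooled:+.2f} percentage points (SE {pooled_se:.2f})")
```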

However, Kalla and Broockman weren’t content to re-analyze the null results of the past. Given the “dearth of statistically precise studies,” the political scientists decided to conduct nine of their own field experiments. They teamed up with Working America, the community organizing affiliate of the AFL-CIO, to study the impact of canvassing in a variety of different campaigns.

The good news, at least for the political industrial complex, is that Working America had an impact during primaries and special elections. Take the Democratic primary for the mayor of Philadelphia. Kalla and Broockman estimate that a Working America canvass conducted six weeks before election day boosted support for their endorsed candidate by approximately 11 percentage points. A similar effect was observed during a special election for a seat in the Washington State Legislature.

However, the effect size shrank to zero when Kalla and Broockman looked at attempts to influence voters during the general election. When Working America tried to persuade people in Ohio, North Carolina, Florida and Missouri to vote for their candidates for the U.S. Senate, Governor and President, the scientists consistently found no impact from the interventions. As they write, “we conclude that, on average, personal contact—such as door-to-door canvassing or phone calls—conducted within two months of a general election has no substantive effect on vote choice.”

This doesn’t mean campaigns are irrelevant. Candidates can still shape voters’ preferences by changing their policy positions and influencing the media narrative. However, Kalla and Broockman do present solid evidence that most of the stuff campaigns spend their billions on is essentially worthless, at least in the general.

Why are voters so hard to persuade? One likely cause is our hyper-partisan age, which has been exacerbated by online filter bubbles. (Republican Facebook is very different from Democratic Facebook.) As Kalla and Broockman write, “When it comes to providing voters with new arguments, frames, and information, by the time election day arrives, voters are likely to have already absorbed all the arguments and information they care to retain from the media and other sources.”

The key caveat in that sentence is “care to retain.” While voters are inundated with information about the election, they are depressingly good at ignoring dissonant facts, or those arguments that might rattle their partisan opinions. (Roughly half of Trump voters, for instance, think that he won the popular vote and that President Obama was born in Kenya.) The end result is that partisanship dominates persuasion; the vast majority of voters vote for their side, with little consideration of candidate or policy details. In a primary election, those partisan cues are less obvious, which means voters are more open-minded about the actual candidates. Persuasion stands a chance.

President Trump's success depends on these trends. His rhetoric and norm violations are consistently directed at a highly specific (and very conservative) slice of the electorate. This approach might be toxic for the body politic, but it does reflect a certain realism about the limits of persuasion. After all, if the other side can’t be reached, then moderation is for chumps. Modern politics isn’t the art of compromise – it’s the act of targeted arousal. (And Facebook makes such targeting extremely easy.)

President Trump's key insight was that all those norm violations would exact a minimal price at the ballot box. When it was time to vote in the general, he knew that partisanship would dominate, and that even those offended Republicans would hold their noses and vote for their guy.

Is there a solution? Not really. I am, however, slightly encouraged by recent research on human ignorance. In a classic study conducted on Yale undergraduates, the psychologists Leonid Rozenblit and Frank Keil asked people to rate how well they understood the objects they used every day, such as toilets, car speedometers and zippers. Then, the students were asked to write detailed descriptions of how these objects worked, before reassessing their understanding.

The quick exercise revealed that most people dramatically overestimate their understanding. We think we know how toilets work because we flush them several times a day, but almost nobody could explain the ingenious siphoning action used to purge the bowl. As Rozenblit and Keil write, “Most people feel they understand the world with far greater detail, coherence and depth than they really do.” They called this mistake the illusion of explanatory depth.

This same illusion is ruining our politics. In a 2013 study, a team of psychologists led by Philip Fernbach found that the illusion of explanatory depth led people to overestimate their understanding of political issues such as the flat tax, single-payer health care system and Iran sanctions. As with the toilet, it wasn’t until people tried to explain their knowledge, along with the impact of their chosen policies, that they realized how little they actually knew. Interestingly, acknowledging the unknown also led them to moderate their political opinions. As Steven Sloman and Philip Fernbach write in their book The Knowledge Illusion, “The attempt to explain caused their positions to converge.”

The practical lesson is that political persuasion isn’t just about slick videos and clever framing. In fact, most of that stuff doesn’t seem to work at all. (Charisma is no match for cognitive dissonance.) Rather, to the extent persuasion seems possible, it seems to be conditional on voters recognizing their own lack of knowledge, or at least grappling with the complexity of the issues. Sloman and Fernbach put it well: “A good leader must be able to help people realize their ignorance without making them feel stupid.”

There is a smidgen of hope in this research. If Trump represents the triumph of hyper-partisanship—he’s most interested in reaffirming the beliefs of his base—these findings suggest that candidates might also persuade voters by emphasizing the hard questions, and not just their partisan answers. At the very least, such rhetoric makes moderation more appealing.

This was an underappreciated part of the Obama playbook. While the former professor was often criticized for his long-winded and nuanced responses, that nuance might have been more persuasive to voters than another set of rehearsed talking points. As President Obama once observed, when asked about the challenges of the Presidency: “These are big, tough, complicated problems. Somebody noted to me that by the time something reaches my desk, that means it’s really hard. Because if it were easy, somebody else would have made the decision and somebody else would have solved it.”

Obama understood what the science reveals: If you want to change someone else’s mind, yelling out your answers won't work; facts are not convincing. Instead, try beginning with an admission of doubt. (Recent research by David Hagmann and George Loewenstein shows that "expressions of doubt and acknowledgment of opposing views increases persuasiveness," especially in the context of motivated reasoning.) We are most persuasive when we first admit we don't know everything.

Kalla, Joshua L., and David E. Broockman. "The Minimal Persuasive Effects of Campaign Contact in General Elections: Evidence from 49 Field Experiments." American Political Science Review (2017): 1-19.

The Increasing Value of Social Skills

The progress of technology is best measured by our obsolescence: we have a knack for creating machines that are better than us. From self-driving trucks to software that reads MRIs, many of our current jobs will soon be outsourced to our own ingenious inventions.

And yet, even as the robots take over, it’s clear that there are some skills that are still best suited for human beings. We might be irrational, distractible and bad at math, but we are also empathetic, cunning and creative. Robots are smart. We are social.

It’s easy to dismiss these soft skills. Unlike IQ scores, they are hard to quantify. However, according to a new working paper by Per-Anders Edin, Peter Fredriksson, Martin Nybom and Bjorn Ockert, these squishy interpersonal skills are precisely the sort of talents that are most in demand in the 21st century. 

What’s driving this shift? The answer is technological. Before there were computers in our pockets, the most valuable minds excelled at cognitive stuff: they were adept at abstraction and gifted with numbers. But now? Those talents are easily replaced by cheap gadgets and free software. Computation has become a commodity. As a result, so-called non-cognitive skills—a catch-all category that includes everything from teamwork to self-control—are becoming increasingly valuable.

To prove this point, the researchers took advantage of a unique data set: between 1969 and 1994, nearly every Swedish male underwent a battery of psychological tests as part of the enlistment procedure for the military draft. Their cognitive scores were based on four tests measuring reasoning ability, verbal comprehension, spatial ability and technical understanding. Their non-cognitive skills, in contrast, were assessed during a 20-minute interview with a trained psychologist. During the interview, the draftee was scored on dimensions including “social maturity,” perseverance and emotional stability.

The researchers then matched these cognitive and non-cognitive scores to wage data collected by the Swedish government. Among workers in the private sector, they found that the returns to cognitive skills were relatively flat between 1992 and 2013. This jibes with related research from the United States labor market, showing that employment growth in “cognitively demanding occupations” slowed down dramatically in the 21st century.

However, Edin et al. observed the opposite trend when it came to non-cognitive skills. For these Swedish workers, being good at the interpersonal and emotional was increasingly valuable, with the partial return to non-cognitive skills roughly doubling over the same time period. It’s not that intelligence doesn’t matter. It’s that emotional intelligence matters more.
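(What does a “return to a skill” mean in practice? In studies like this, it is typically the coefficient from a wage regression estimated year by year: log wages regressed on standardized cognitive and non-cognitive scores. Here is a minimal sketch with simulated data; the coefficients and sample are made up, not the Swedish registry.)

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5_000

# Simulated, standardized draft-test scores (not the real data).
cognitive = rng.standard_normal(n)
noncognitive = rng.standard_normal(n)

# Assume log wages load on both skills, plus noise (coefficients are invented).
log_wage = 0.10 * cognitive + 0.12 * noncognitive + rng.normal(0, 0.5, n)

# OLS: the fitted slopes are the estimated "returns" to each skill.
X = np.column_stack([np.ones(n), cognitive, noncognitive])
beta, *_ = np.linalg.lstsq(X, log_wage, rcond=None)

print(f"return to cognitive skill:     {beta[1]:.3f}")
print(f"return to non-cognitive skill: {beta[2]:.3f}")
```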

According to the economists, one of the reasons non-cognitive skills are becoming more valuable is that they are required for managerial roles. A good manager doesn’t just issue edicts: he or she must also coordinate workers, placate egos and deal with disagreements. As the economist David Deming noted in his paper, “The Growing Importance of Social Skills in the Labor Market,” “Such non-routine interaction is at the heart of the human advantage over machines…Reading the minds of others and reacting is an unconscious process, and skill in social settings has evolved in humans over thousands of years.” (Watson might trounce us at Jeopardy!, but the supercomputer would probably be a terrible boss.) The importance of such non-cognitive skills for management helps explain why the bigger paychecks are going to those with the best social skills.

This research inevitably leads back to education. The traditional classroom, after all, has been mostly focused on building up cognitive skills. We drill students on arithmetic and pre-algebra; we ask them to memorize answers and follow the rules; the ultimate measure of one’s education is the SAT, a highly cognitive test. Such talents will always be necessary: even in the age of robots, it’s nice to know your multiplication tables.

However, it’s becoming increasingly clear that our classrooms are preparing students for a workforce that no longer exists. They are being taught the most replaceable skills, drilled on the tasks that computers already perform. (It’s a bit like teaching parchment preparation after Gutenberg.) This trend is only getting worse in the age of standardized tests, which focus classroom time on material that can be easily measured by multiple choice questions. Unfortunately, that’s often the very kind of education that technology has rendered obsolete. If you need to memorize it, then chances are a computer can do it better.

The obvious alternative is for classrooms to follow the money, at least when it comes to the wage returns on non-cognitive skills. We should invest in classrooms that teach students how to work together and handle their feelings, even if such soft skills are harder to assess. What’s more, there’s reason to believe that many of these socio-emotional skills are learned relatively early in life, suggesting that we need to invest in effective pre-school and kindergarten curriculums. (Interventions targeting at-risk parents have also proven effective.) While these enhanced socio-emotional abilities might not translate to improved academic performance, there’s evidence that they remain linked to adult outcomes such as employment, earnings and mental health.

The modern metaphor of the human mind is that it’s a biological computer, three pounds of meaty microchips. But it turns out that the real value of the mind in the 21st century depends on all the ways it’s not like a computer at all. It’s not about how much information we can process, because there’s always a machine that can process more. It’s about how we handle those feelings that only we can feel.

The future belongs to those who play well with others.

Hat tip: Marginal Revolution

Does Divorce Increase the Risk of the Common Cold, Even Decades Later?

On November 30, 1939, 450,000 Soviet troops stormed across the Finnish border, setting off nearly five years of brutal conflict. The cities of Finland were strafed by bombers; severe food rationing was put into effect; roughly 2.5 percent of the population was killed. To protect Finnish children from the war, about 70,000 of them were evacuated to temporary foster homes in Sweden and Denmark.

At first glance, it seems like evacuating children from a war zone is the responsible choice. Nevertheless, multiple studies have found that those Finnish children who were sent away have had to deal with the more severe long-term consequences. They might have avoided the acute stress of war, but they had to cope with the chronic stress of separation. A 2009 study found that Finnish adults who were separated from their parents between 1939 and 1944 showed an 86 percent increase in deaths due to cardiovascular illness compared to those who had stayed at home. Although more than sixty years had passed since the war, these temporary orphans were also significantly more likely to have high blood pressure and type 2 diabetes. Other studies have documented elevated levels of stress hormone and increased risk of severe depressive symptoms among the wartime evacuees.

What explains these tragic correlations? The Finnish studies build on decades of research showing that disruptions to our early attachment relationships—such as separating young children from their parents during wartime—can have a permanent impact on our health.

The latest evidence for the link between early attachment and adult medical outcomes comes from a new paper in PNAS by the scientists Michael Murphy, Sheldon Cohen, Denise Janicki-Deverts and William Doyle. But these researchers didn’t look at wartime evacuations – they looked at divorce. While parental divorce during childhood has been statistically linked to an increased risk for various physical ailments, from asthma to cancer, these studies have tended to rely on self-reports. As a result, it’s been difficult to determine the underlying cause of the correlations.

To explore this practical mystery, Murphy et al. came up with a clever experimental design. They quarantined 201 healthy adults and gave them nasal drops containing rhinovirus 39, a virus that causes the common cold. They carefully monitored the health of the subjects over the next five days, tracking their symptoms, weighing their mucus, and collecting various markers of immune response and inflammation.

The first thing the scientists found is that not all divorces are created equal. This accords with a growing body of evidence showing that the quality of the parents’ relationship with each other after separation may be more important in predicting the adjustment of their children than the separation itself. This led the scientists to ask those subjects whose parents had lived apart whether their parents spoke to each other after the separation. As the scientists note, “having parents who are separated and not on speaking terms suggests high levels of acrimony in the childhood family environment.” Such conflict can be extremely stressful for children.

How did these bitter separations during childhood impact the response of the adult subjects to a cold virus? The results were clear. Those adults whose parents lived apart and never spoke during their childhood were more than three times as likely to develop a cold as adults from intact families or those whose parents separated but were still on speaking terms. What’s more, the differences persisted even after the scientists corrected for a raft of possible confounding variables, such as demographics, childhood SES, body mass index, etc.

PLANS = Parents that Lived Apart and Never Spoke
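
To make the idea of “correcting for confounders” concrete, here is a purely illustrative sketch of the standard approach: fit a logistic regression of cold incidence on a PLANS indicator plus a few covariates, then read off the adjusted odds ratio. The data, covariates and coefficients below are all hypothetical; this is not the authors’ actual analysis.

    # Illustrative only: estimate an adjusted odds ratio for "PLANS" subjects
    # by fitting a logistic regression with hypothetical covariates.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 201                                  # sample size reported in the study
    plans = rng.integers(0, 2, n)            # 1 = parents lived apart, never spoke
    age = rng.normal(30, 8, n)               # hypothetical covariates
    bmi = rng.normal(25, 4, n)
    childhood_ses = rng.normal(0, 1, n)

    # Simulate cold incidence with a higher baseline risk for the PLANS group.
    true_logit = -1.0 + 1.1 * plans + 0.01 * (age - 30)
    cold = rng.binomial(1, 1 / (1 + np.exp(-true_logit)))

    X = sm.add_constant(np.column_stack([plans, age, bmi, childhood_ses]))
    fit = sm.Logit(cold, X).fit(disp=False)
    print("adjusted odds ratio for PLANS:", np.exp(fit.params[1]))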

There are two possible explanations for this increased risk. The first is that a bitter divorce weakens the immune system, making those subjects more vulnerable to the rhinovirus, even decades later. The second possibility is that divorce heightens the inflammatory response post-infection, thus triggering the annoying symptoms (mucus, sore throat, mild fever, etc.) associated with the common cold.

The evidence strongly favors the second explanation. For one thing, there was no statistical relationship between divorce and viral infection: everyone was equally likely to show antibodies to the virus in their blood. However, there was a large difference in how the body responded to the infection, with those from non-speaking divorced households being much more likely to exhibit symptoms of illness. (This increased risk was mediated by measurements of inflammatory cytokines.) Although it’s still unclear why an ugly divorce might alter our response to viral infection, one intriguing hypothesis is that the chronic stress of having parents who never stop fighting can cause immune cells to become desensitized to the very hormones that help suppress the inflammation response. In other words, an ugly parental separation can mark our stress response for life, an invisible wound we never get over.

This research is an important reminder of attachment’s long reach: even the most basic aspects of our physical health, like resistance to the common cold, are shaped by emotional events that happened decades before. But it’s also a demonstration that not every rupture of attachment leaves lasting scars; different kinds of divorce can have a very different impact on children. According to this data, the key element might be finding some way to constructively communicate with our former spouse, at least in matters relating to childcare. While intervention studies are needed to directly test this possibility, it seems likely that a little civility can help buffer the fallout of living apart.

Murphy, Michael LM, et al. "Offspring of parents who were separated and not speaking to one another have reduced resistance to the common cold as adults." Proceedings of the National Academy of Sciences (2017)

Can Love Help You Forget Painful Memories?

 "Perfect love casts out fear." 

-1 John 4:18

It's now a firmly established fact that loving attachments are an important component of good health. According to dozens of epidemiological studies, people in long-term relationships are significantly less likely to suffer from cancer, viral infections, mental illness, pneumonia, and dementia. They have fewer surgeries, car accidents, and heart attacks. Their wounds heal faster and they have a lower risk of auto-immune diseases. 

Consider the results from the Harvard Study of Adult Development, which has been tracking 268 Harvard men since the late 1930s. The study set out to identify the medical measurements that could predict health outcomes—researchers tracked everything from the circumference of the chest to the hanging length of the scrotum—but none of that data proved useful. Instead, what George Vaillant and other scientists discovered after tracking the men for nearly seven decades is that "the capacity for love turns out to be a great predictor of mortality." For instance, those men in the "loveless" category—they had the fewest attachments—were three times more likely to be diagnosed with a mental illness and five times more likely to be "unusually anxious." The loneliest men were also ten times more likely to suffer from a chronic illness before the age of fifty-two and three times more likely to become heavy users of alcohol and tranquilizers.

Those are just a few of the tragic correlations. (I wrote more about them in A Book About Love, which is soon out in paperback.) Nevertheless, the causal mechanics of these health benefits are unclear. How, exactly, do close relationships prevent such a wide variety of serious illnesses, from alcoholism to heart disease? Why does love keep us alive?

Those practical mysteries are the subject of a new paper in PLOS One by Erica Hornstein and Naomi Eisenberger, psychologists at UCLA. The scientists began by asking subjects to identify “the individual who gives you the most support on a daily basis.” They were told that these individuals could come from any relationship: parent, friend, romantic partner, etc.  The subjects were then asked to provide a picture of this supportive figure.

The experiment itself was a classic fear learning paradigm. First, the scientists had to calibrate the proper amount of electric shock for each subject: they wanted the experience to be “extremely uncomfortable, but not painful.” (The shock is what triggers the fear.) During the acquisition phase, the subjects were shown various neutral images, such as different clocks and stools. One of these images was paired with a picture of their social support figure while the other was paired with a stranger matched for gender, age and ethnicity. Finally, these image-and-face combinations were paired with that “extremely uncomfortable” electric shock. This pairing process was repeated six times.

It might seem unlikely that the mere picture of a loved one could keep us from remembering a fearful association. Nevertheless, when Hornstein and Eisenberger measured fear responses using the skin conductance response of the hand—when you’re scared or anxious, the hands begin to sweat—they found a dramatic difference between the images paired with support figures and those paired with strangers.


What accounts for this striking difference? According to the scientists, the most plausible explanation is that the picture of a loved one can “inhibit the formation of fear associations,” preventing us from remembering those scary stimuli in the first place. This builds on related work by Erica Hornstein, Michael Fanselow and Naomi Eisenberger showing that pictures of social support figures can also enhance fear extinction, so that subjects are less likely to react to images that had previously been paired with an electric shock. In short, thinking of a loved one can serve as a useful form of amnesia, at least when it comes to fear memories.

This is pure speculation, but I wonder if studies like this might be used to develop new therapeutic tools. So much of therapy is about learning how to retell our personal history in less painful ways, reducing those triggers that send us into paroxysms of fear, anxiety and despair. One obvious approach would be to make sure our retellings of negative events happen in conjunction with our support figures. Maybe it involves having a picture of a partner nearby, or asking questions about the trauma that frame the event in terms of how our attachment figures helped us through. When it comes to buffering the bad stuff, the best medicine is the people we love. A good relationship is like Xanax without the side-effects.

Hornstein, Erica A., and Naomi I. Eisenberger. "Unpacking the buffering effect of social support figures: Social support attenuates fear acquisition." PloS one 12.5 (2017): e0175891.

Is Facebook Bad for Democracy?

We are living in an era of extreme partisanship. As documented by the Pew Research Center, majorities of people in both parties now express “very unfavorable” views of the other side, with most concluding that the policies of the opposition “are so misguided that they threaten the nation’s well-being.” 79 percent of Republicans approve of Trump’s performance as president, while 79 percent of Democrats disapprove. In many respects, party affiliation has become the lens through which we see the world; even the Super Bowl can’t escape the stink of politics.

There are two ways of understanding these divisions.

The first is to look at the historical parallels. Partisanship, after all, is as American as apple pie and SUVs. George Washington, in his farewell address, warned that the rise of political parties might lead to a form of “alternate domination,” as the parties would gradually “incline the minds of men to seek security... in the absolute power of an individual.” In the election of 1800, his prophecy almost came true, as several states were preparing to summon their militias if Jefferson lost. Our democracy has always been a contact sport.

But there’s another way of explaining the political splintering of the 21st century. Instead of seeing our current divide as a continuation of old historical trends, this version focuses on the impact of new social media. Donald Trump is not the latest face of our factional republic—he’s the first political figure to fully take advantage of these new information technologies.

Needless to say, this second hypothesis is far more depressing. We know our democracy can handle partisan passions. It’s less clear it can survive Facebook.

Why might technology be cratering our public discourse? To answer this question, a new paper in PLOS ONE by a team of Italian researchers at the IMT School for Advanced Studies Lucca and Brian Uzzi at Northwestern looked at 12 million users of Facebook and YouTube. They began by identifying 413 different Facebook pages that could be sorted into one of two categories: Conspiracy or Science. Conspiracy pages were those that featured, in the delicate wording of the scientists, “alternative information sources and myth narratives—pages which disseminate controversial information, usually lacking supporting evidence and most often contradictory of the official news.” (Examples include Infowars, the Fluoride Action Network and the ironically named I Fucking Love Truth.) Science pages, meanwhile, were defined as those having “the main mission of diffusing scientific knowledge.” (Examples include Nature, Astronomy Magazine and Eureka Alerts.)

The researchers then looked at how users interacted with videos appearing on these sites on both Facebook and YouTube. They looked at comments, shares and likes between January 2010 and December 2014. As you can probably guess, many users began the study only watching videos from either the Conspiracy or Science categories. (These people are analogous to voters with entrenched party affiliations.) The researchers, however, were most interested in those users who interacted with both categories; these folks liked Neil deGrasse Tyson and Alex Jones. Think of them as analogous to registered Democrats who voted for Trump, or Republicans who might vote for a Democratic congressperson in the 2018 midterms.

Here’s where things get unsettling. After just fifty interactions on YouTube and Facebook, most of these “independents” started watching videos exclusively from one side. Their diversity of opinions gave way to uniformity, their quirkiness subsumed by polarization. The filter bubble won. And it won fast.

Why does the online world encourage polarization? The scientists focus on two frequently cited forces. The most powerful force is confirmation bias, that tendency to seek out information that confirms our pre-existing beliefs. It’s much more fun to learn about why we’re right (Fluoride = cancer) than consider the possibility we might be wrong (Fluoride is a safe and easy way to prevent tooth decay). Entire media empires have been built on this depressing insight.

The second force driving online polarization is the echo chamber effect. Most online platforms (such as the Facebook News Feed) are controlled by algorithms designed to give us a steady drip of content we want to see. That’s a benign aspiration, but what it often means in practice is that the software filters out dissent and dissonance. If you liked an Infowars video about the evils of vaccines, then Facebook thinks you might also like their videos about fluoride. (This helps explain why previous research has found that more active Facebook users tend to get their information from a smaller number of news sources.) “Inside an echo chamber, the thing that makes people’s thinking evolve is the even more extreme point of view,” Uzzi said in a recent interview with Anne Ford. “So you become even more left-wing or even more right-wing.” The end result is an ironic affliction: we are more certain than ever, but we understand less about the world.

This finding jibes nicely with another new paper that directly tested the impact of filtered newsfeeds. In a clever lab experiment, Ivan Dylko and colleagues showed that feeds similar to those on Facebook led people to spend far less time reading articles that contradicted their political beliefs. Dylko et al. end on a somber note: “Taken together, these findings show that customizability technology can undermine important foundations of deliberative democracy. If this technology becomes even more popular, we can expect these detrimental effects to increase.”

The obvious solution to these problems is to engage in more debunking. If people are seeking out fake news and false conspiracies, then we should confront them with real facts. (This is what Facebook is trying to do, as they now include links to debunked articles in News Feeds.) Alas, the evidence suggests that this strategy might backfire. A previous paper by several of the Italian scientists found that Facebook users prone to conspiracy thinking react to contradictory information by “increasing their engagement within the conspiracy echo chamber.” In other words, when people are told they’re wrong, they don’t revise their beliefs. They just work harder to prove themselves right. It’s cognitive dissonance all the way down.

It was only a few generations ago that most Americans got their news from a few old white men on television. We could choose between Walter Cronkite (CBS), John Chancellor (NBC) and Harry Reasoner (ABC). It was easy to assume that Americans wanted this shared public discourse, or at least a fact-checked voice of authority, which is why nearly 30 million people watched Cronkite every night.* But now it’s clear that we only watched these shows because we had no choice—their appeal depended on the monopoly of network television. Once this monopoly disappeared, and technology gave us the ability to curate our own news, we flocked to what we really wanted: a platform catering to our biases and beliefs.

Tell me I’m right, but call it the truth.

Bessi, Alessandro, Fabiana Zollo, Michela Del Vicario, Michelangelo Puliga, Antonio Scala, Guido Caldarelli, Brian Uzzi, and Walter Quattrociocchi. "Users Polarization on Facebook and Youtube." PLOS ONE 11, no. 8 (2016): e0159641.

*The shared public discourse reduced political partisanship. In the 1950s, the American Political Science Association published a report fretting about the lack of ideological distinction between the two parties. The lack of overt partisanship, they said, might be undermining voter participation.

 

Nobody Knows Anything (NFL Draft Edition)

Pity the Cleveland Browns fan. Seemingly every year, the poor performance of the team leads to a high first-round pick: in this year’s draft, the Browns are making the first selection. And every year the team squanders the high pick, either by trading down and missing a superstar (Julio Jones in 2011) or trading up for a pick that didn’t pan out (Johnny Manziel in 2014, Trent Richardson in 2012, Brady Quinn in 2007, et al.) The draft is supposed to be a source of hope, a consolation prize for all the failures of the past. But for the hapless Browns, it has become yet another reminder of their chronic struggles.

This blog is not another critique of a pitiful team. The Browns might have a terrible track record in the draft, but I’m here to tell you that it’s not their fault. And that’s for a simple reason: picking college players is largely a crapshoot, a game of dice played with young athletes. The Browns might not know how to identify the college players with the most potential, but there’s little evidence that anybody else does, either. 

It’s not for lack of trying. Every year, professional football teams invest a huge amount of time and effort into choosing which college players to take with their draft picks. This is for the obvious reason: picks are extremely valuable. (Because the NFL has a strict cap on rookie salaries, new players are significantly underpaid, at least compared to their veteran colleagues.) Given the high stakes involved, it seems reasonable to assume that teams would have developed effective methods of identifying those players most likely to succeed in the pros. 

But they haven’t. That, at least, is the conclusion of a 2013 analysis of the NFL draft by Cade Massey and Richard Thaler. Consider one of their damning pieces of evidence, which involves the likelihood that a given player performs better in the NFL than the next player chosen in the draft at his position. As Massey and Thaler note, this is the practical question that teams continually face in the draft, as they debate the advantages of trading up to acquire a specific athlete.

Unfortunately, there is virtually no evidence that teams know what they’re doing: only 52 percent of picks outperform those players chosen next at the same position. “Across all rounds, all positions, all years, the chance that a player proves to be better than the next-best alternative is only slightly better than a coin flip,” write the economists. Or consider this statistic, which should strike fear into the heart of every NFL general manager: over their first five years in the league, draft picks from the first round have more seasons with zero starts (15.3 percent) than seasons that end with a selection to the Pro Bowl (12.8 percent). While draft order is roughly correlated with talent – players taken early tend to have better professional careers – Massey writes in an email that he “considers differences between team performance in the draft to be, effectively, all chance.” The Browns aren’t stupid, just unlucky.

If teams admitted their ignorance, they could adjust their strategy accordingly. They could discount their scouting analysis and remember that college performance is only weakly correlated with NFL output. They might even explore new player assessment strategies, as the old ones don't seem to work very well. 

Alas, teams routinely act as if they can identify the best players, which is what leads them to trade up for more valuable picks. But this is precisely the wrong approach. As proof, Massey and Thaler compute a statistic they call “surplus value,” which reflects the worth of a player’s performance (as calculated by the pay scale of NFL veterans) minus his actual compensation. “If picks are valued by the surplus they produce, then the first pick in the first round is the worst pick in the round, not the best,” write the economists. “In paying a steep price to trade up, teams are paying a lot to acquire a pick that is worth less than the ones they are giving up.”
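
To make the surplus-value arithmetic concrete, here is a toy sketch with made-up numbers. It is not Massey and Thaler’s actual valuation model, just the subtraction they describe: performance priced at veteran market rates minus the rookie-scale salary actually paid.

    # Toy illustration of "surplus value" (all figures hypothetical,
    # in millions of dollars per year).
    def surplus_value(performance_at_veteran_rates: float,
                      rookie_compensation: float) -> float:
        return performance_at_veteran_rates - rookie_compensation

    # The #1 overall pick costs far more than a late first-rounder,
    # so even a better player can return less surplus.
    top_pick = surplus_value(performance_at_veteran_rates=9.0, rookie_compensation=7.5)
    late_first = surplus_value(performance_at_veteran_rates=5.0, rookie_compensation=2.0)
    print(top_pick, late_first)   # 1.5 vs 3.0: the cheaper pick yields more surplus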

Why are most NFL teams so bad at the draft? The main culprit is what Massey and Thaler refer to as “overconfidence exacerbated by information.” Teams assume their judgments about prospective players are more accurate than they are, especially when they amass large amounts of data and analytics. What they fail to realize is that much of this information isn’t predictive, and that it’s almost certainly framed by the same biases and blind spots that limit our assessments of other people in everyday life. As Massey and Thaler write: “The problem is not that future performance is difficult to predict, but that decision makers do not appreciate how difficult it is.” 

There is something deeply sobering about the limits of draft intelligence among NFL teams. These are athletes, after all, whose performance has been measured by a dizzying array of advanced stats; they have been scouted for years and run through a gauntlet of psychological and physical assessments. (As the economists write, “football teams almost certainly are in a better position to predict performance than most employers choosing workers.”) However, even in this rarefied domain, the mystery of human beings still dominates. We live in the age of big data and sabermetrics, which means that it’s harder than ever to know what we don’t know. But this paper is an important reminder that such meta-knowledge is essential—when we ignore the error bars, we’re much more likely to make a very big mistake.

Bill Belichick, the coach of the New England Patriots (and former coach of the Cleveland Browns!), has won lots of games by pushing back against the curse of overconfidence. If Belichick has a signature move in the draft, it’s trading down, swapping a high pick for multiple less valuable ones. (Under Belichick, the Patriots have gained more than 25 compensatory draft picks.) If teams could reliably assess talent, this strategy would make little sense, since it would mean giving up on superstars. However, given the near impossibility of predicting elite player performance, gaining more picks is an astute move. Since nobody knows who to choose, the only way to play is to make a lot of bets.

Massey, Cade, and Richard H. Thaler. "The loser's curse: Decision making and market efficiency in the National Football League draft." Management Science 59.7 (2013): 1479-1495.

When Is Ignorance Bliss?

Aristotle’s Metaphysics begins with a seemingly obvious truth: “All men by nature desire to know.” According to Aristotle, this desire for knowledge is our defining instinct, the quality that sets our mind apart. As the cognitive psychologist George Miller put it, we are informavores, blessed with a boundless appetite for information.

It’s a comforting vision. However, like all dictums about human nature, it also comes with plenty of caveats and exceptions. Take spoiler alerts. It’s hard to read an article about a work of entertainment that doesn’t contain a warning to readers. The assumption of these warnings, of course, is that people don’t want to know, at least when it comes to narratives.

And it’s not just the latest twists in Scandal that we’re trying to avoid. Twenty percent of Malawian adults at risk for HIV decline to get the results of their HIV test, even when offered cash incentives; approximately 10 percent of Canadians with a family history of Huntington’s disease choose not to undergo genetic testing. (Even James Watson declined to have his risk of Alzheimer’s revealed.) These are just specific examples of a larger phenomenon. Given the advances in genetic testing and biomarkers, the Aristotelian model would predict that we’d all become subscribers to 23andMe. But that’s not happening.

A new paper in Psychological Review by Gerd Gigerenzer and Rocio Garcia-Retamero explores the motives of our willful ignorance. They begin by establishing its prevalence, surveying more than 2000 German and Spanish adults about various forms of future knowledge. Their results are clear evidence that most of us want spoiler alerts for real life: between 85 and 90 percent of subjects say they don’t want to know when or why their partner will die. (They feel the same way about their own death.) They also don’t want to know if their marriage will eventually end in divorce. This preference for ignorance even applies to positive events: between 40 and 70 percent of subjects don't want to know about their future Christmas gifts, or who won the big soccer match, or the gender of their next child.

To understand our reasons for ignorance, Gigerenzer and Garcia-Retamero asked subjects about their risk attitudes. They found that people who are more risk-averse (as measured by their insurance purchases and their choices playing a simple lottery game) are more likely to prefer not knowing. While this might appear counterintuitive—learning how you will die might help reduce the risk of dying— Gigerenzer and Garcia-Retamero explain these results in terms of anticipatory regret. People avoid risks because they don’t want to regret those losing gambles. They avoid life spoilers for a similar reason, as they're trying to avoid regretting the decision to know. 

On the one hand, this intuition has a logical sheen. It’s not that ignorance is bliss—it’s just better than knowing that life can be shitty and full of suffering. Knowing exactly how we’ll suffer might only make it worse. The same principle also applies to the good stuff: we think we'll be less happy if we know about our happiness in advance. Life is like a joke—it's not so funny if we get the punchline first.

But there’s also some compelling evidence that our intuitions about regretting future knowledge are wrong. For one thing, it’s not clear that spoilers spoil anything. Consider a 2011 study by Jonathan Leavitt and Nicholas Christenfeld. The scientists gave several dozen undergraduates twelve different short stories. The stories came in three different flavors: ironic twist stories (such as Chekhov’s “The Bet”), straight up mysteries (“A Chess Problem” by Agatha Christie) and “literary stories” by writers like Updike and Carver. Some subjects read the story as is, without a spoiler. Some read the story with a spoiler carefully embedded in the actual text, as if Chekhov himself had given away the end. And some read the story with a spoiler disclaimer in the preface.

Here’s the shocking twist: the scientists found that almost every single story, regardless of genre, was more pleasurable when prefaced with some sort of spoiler. It doesn’t matter if it’s Harry Potter or Hamlet: an easy way to make a good story even better is to spoil it at the start. As the scientists write, “Erroneous intuitions about the nature of spoilers may persist because individual readers are unable to compare spoiled and unspoiled experiences of a novel story. Other intuitions about suspense may be similarly wrong: Perhaps birthday presents are better when wrapped in cellophane, and engagement rings when not concealed in chocolate mousse.”

In fiction as in life: we assume our pleasure depends on ignorance. However, Leavitt and Christenfeld argue that spoilers enhance narrative pleasure by letting readers pay more attention to developments along the way. Because we know the destination, we’re better able to enjoy the journey. 

There's more to life than how it ends.

Gigerenzer, Gerd, and Rocio Garcia-Retamero. "Cassandra’s regret: The psychology of not wanting to know." Psychological Review 124.2 (2017): 179 

Why College Should Become A Lottery

Barry Schwartz, a psychologist at UC-Berkeley and Swarthmore, does not think much of the college admissions process. In a new paper, he tells a story about a friend who spent an afternoon with a high-school student. His friend was impressed by the student and, for the first time in thirty years of teaching, decided to send a note to the dean of admissions. Despite the note, the student did not get in. Schwartz describes what happened next:

“Curious, my friend asked the dean why. ‘No reason,’ said the dean. ‘No reason?,’ replied my friend, somewhat incredulous. ‘Yes, no reason. I can’t tell you how many applicants we reject for no reason.’”

For Schwartz, such stories are a sign of a broken system. Although colleges pretend to be paragons of meritocracy, their selection methods are rife with randomness. “Despite their very best efforts to make the selection process rational and reasonable, admissions people are, in effect, running a lottery,” Schwartz writes. “To get into Harvard (or Stanford, or Yale, or Swarthmore), you need to be good...and you need to be lucky.”

Schwartz devotes much of his article to the severe negative consequences inflicted by this capricious selection process. He begins by lamenting the ways in which it discourages students from experimenting, both inside and outside the classroom. Because teenagers are so terrified of failure—Harvard requires perfection!—they refuse to take classes that might end with the crushing disappointment of a B+. Over time, this can lead to high-school students who “may look better than ever before” but are probably learning less.

But wait: it gets worse. Much worse. Suniya Luthar, a professor of psychology at Arizona State University, has spent the last several years documenting the emotional toll of the college competition on upper-middle class children. Although these affluent kids lead enviable lives on paper—they have educated white-collar parents, high test scores and attend elite high schools—they are roughly twice as likely to suffer from the symptoms of depression and anxiety as the national average. They are also far more likely to have eating disorders and meet the diagnostic criteria for substance abuse.

There are, of course, countless variables driving this epidemic of mental health issues among affluent teenagers. (Maybe it’s Snapchat’s fault? Or a side-effect of helicopter parenting?) However, Luthar argues that one of the main causes is what she calls the “pressure to achieve.” The problem with the pressure is that it’s a double-edged sword. If a student’s achievements fall short, then he feels inadequate. However, even if a student gets straight As, she probably still lives in what Luthar calls “a state of fear of not achieving.” Over time, that chronic sense of fear can lead to anxiety disorders and depression; kids are burned out on stress before they even leave their childhood homes.

How can we fix this competitive morass? Schwartz offers a provocative solution. (In an email, he observes that he first offered this proposal a decade ago. In the years since, it’s only gotten more necessary.) The first phase of his plan involves filtering applicants using the same academic standards currently in place. Schwartz estimates that these standards—GPA, SAT scores, extracurricular activities, etc.—could cut the applicant pool by up to two-thirds. But here’s the crucial twist: after this initial culling, all of the acceptable students would be entered into an admissions lottery. The winners would be drawn at random.
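
For readers who like to see the mechanics, here is a minimal sketch of the two-stage idea, with hypothetical screening thresholds: filter the pool for applicants who are “good enough,” then draw the admitted class at random. Any real school would of course define its own bar.

    # Minimal sketch of a two-stage admissions lottery (hypothetical cutoffs).
    import random

    def lottery_admissions(applicants, is_good_enough, n_slots, seed=None):
        eligible = [a for a in applicants if is_good_enough(a)]   # stage 1: screen
        rng = random.Random(seed)
        return rng.sample(eligible, min(n_slots, len(eligible)))  # stage 2: lottery

    # Hypothetical applicant pool and thresholds.
    applicants = [{"id": i, "gpa": random.uniform(2.0, 4.0),
                   "sat": random.randint(900, 1600)} for i in range(10_000)]
    admitted = lottery_admissions(
        applicants,
        is_good_enough=lambda a: a["gpa"] >= 3.5 and a["sat"] >= 1300,
        n_slots=1_000,
        seed=42,
    )
    print(len(admitted), "admitted at random from the good-enough pool")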

Such a lottery system, Schwartz writes, would offer multiple advantages over our current fake meritocracy. For one thing, it would be much less stressful for teenagers to strive to be “good enough” rather than the best; high-achieving students wouldn’t have to be the highest achieving. This, in turn, would “free students up to do the things they were really passionate about.” Instead of chasing extrinsic rewards—does Stanford need an oboe player?—adolescents would be free to follow their sense of intrinsic motivation.* By making selective colleges less selective, Schwartz says, they can get happier and more well-rounded students.

The hybrid lottery system would also force colleges to be more transparent about their selection methods. Right now, the admissions process is a black box; such secrecy is what allows colleges to accept legacies and reject otherwise qualified students for no particular reason. However, if the schools were forced to define their lottery cut-off, they would have to reflect on the measurements that actually predict academic success. And this doesn’t mean the criteria must be quantitative. As Schwartz notes, “criteria for ‘good enough’ can be sufficiently flexible that applicants who are athletes, violinists, minorities, or from Alaska get ‘credit’ for these characteristics,” just as in the current system.

The most obvious objection to Schwartz’s lottery system is ethical. For many people, it just seems wrong to base a major life decision on a roll of the dice. But here’s the thing: the college application process is already a crapshoot. (The differences used to differentiate applicants—say, 10 points on the SAT—are often smaller than the amount of error in the assessments.) By making the lottery explicit, students and schools would at least be forced to have a candid conversation about the role of luck in life. Instead of taking full credit for our admission, or blaming ourselves for our rejection, we’d admit that much of success is random chance and pure contingency. Perhaps, Schwartz writes, this might make students a little “more empathic when they encounter people who may be just as deserving as they are, but less lucky.”

Schwartz is best known for his research on the pitfalls of the maximizing decision-making strategy, in which people obsess over finding the best possible alternative. The problem with this approach, Schwartz and colleagues have repeatedly found, is that it ends up making us miserable. Instead of being satisfied with a perfectly acceptable option, we get stressed about finding a better one. And then, once we make a choice, studies show that maximizers end up drenched in regret, fixated on their foregone options. We’re trained to be maximizers by consumer culture—who wants to settle for the second best laundry detergent?—but it’s usually a shortcut to a sad life.

This new paper extends the maximizing critique to higher education. In Schwartz’s telling, the college application process is a particularly powerful example of how the maximizing approach can lead us astray. Given the inherent uncertainty of matching students and colleges, Schwartz argues that it’s foolish to try to find the ideal school. Rather, we should practice an approach that Herbert Simon called satisficing, in which we search for colleges that are good enough. After all, the evidence suggests we can be equally happy at a multitude of places.

This, perhaps, is the greatest virtue of the lottery proposal: by making it impossible for students to act like maximizers—chance chooses for them—they will be given a life lesson in the power of satisficing. Instead of wasting their dreams on a dream school, they should follow their adolescent passions and embrace the chanciness of life. You can’t always get exactly what you want. But if you practice satisficing, you just might get what you need.

*The danger of replacing intrinsic motivation with extrinsic rewards was first demonstrated in a classic study of preschoolers. Some of the young children were told they would get a reward for drawing with pens. You might think this would encourage the kids to draw even more. It didn’t. Instead, those preschoolers given an “expected reward” were less likely to use the pens in the future. (And when they did use the pens, they spent less time drawing.) The extrinsic rewards, said the scientists, had turned “play into work.”

Schwartz, Barry (2016) “Why Selective Colleges Should Become Less Selective—And Get Better Students,” Capitalism and Society: Vol. 11: Iss. 2.

The Headwinds Paradox (Or Why We All Feel Like Victims)

When you are running into the wind, the air feels like a powerful force. It’s blowing you back, slowing you down, an annoying obstacle making your run that much harder.

And then you turn around and the headwind becomes a tailwind. The air that had been pushing you back is now propelling you forward. But here’s the question: do you still notice it?

Probably not. Simply put, headwinds are far more salient than tailwinds. When it comes to exercise, we fixate on the barrier and ignore the boost.

In a new paper, the psychologists Shai Davidai and Thomas Gilovich show that this same asymmetry is present across many aspects of life, and not just when we’re running on a windy day.

As evidence, Davidai and Gilovich conducted a number of clever studies. In the first experiment, they asked people which political party was advantaged or disadvantaged by the rules of American democracy, such as the electoral college. As expected, partisans on both sides believed their side suffered from the headwinds, so that Democrats were convinced the political system favored Republicans and Republicans believed it favored Democrats. Interestingly, the size of the effect was moderated by the level of political engagement, with more engagement leading to a stronger sense of unfairness. In short, the more you think about American politics, the more convinced you are that the system is stacked against you. (In fairness to Democrats, recent history suggests they might be right.)

A similar effect was also observed among football fans, who were much more likely to notice the difficult games on their team’s upcoming schedule than the easy ones. The headwinds/tailwinds asymmetry even shaped the career beliefs of academics, as people in a given sub-discipline believed they faced more hurdles than those in other sub-disciplines.

And then there’s family life, that rich vein of grievance. When the psychologists asked siblings if their parents had been harder on the older or younger child, their answers depended largely on their own position in the family. Older children were convinced that their parents had gone easy on their little siblings, while younger siblings insisted the discipline had been evenly distributed. Mom always loves someone else the most.

According to Davidai and Gilovich, the underlying cause of the headwind effect is the availability heuristic, in which our judgement is distorted by the ease with which relevant examples come to mind. First described by Kahneman and Tversky, the availability heuristic is why people think tornadoes are deadlier than asthma—tornadoes generate headlines, even though asthma takes 20 times more lives—and why spouses tend to overestimate their share of household chores. (We remember that time we took out the garbage; we don’t remember all those times we didn’t.) As Timur Kuran and Cass Sunstein point out, the availability bias might be “the most fundamental heuristic” of them all, constantly distorting our judgements of frequency and probability. We see through a glass, darkly; the availability heuristic is often what makes the glass so dark. 

This new paper shows how the availability bias can even warp our life narratives. We think our memory reflects the truth; it feels like a fair accounting of events. In reality, though, it’s a story tilted towards resentment, since it’s so much easier for us to remember every slight, wound and obstacle.

Why does this matter? Didn’t we already know that our memory is mostly bullshit? Davidai and Gilovich argue that this particular mnemonic flaw comes with serious practical consequences. For one thing, the headwind effect makes it harder for us to experience gratitude, which research shows is associated with higher levels of happiness, fewer hospitalizations and a more generous approach towards others. Because we take the tailwinds of life for granted—the headwinds consume all our attention—we have to work to notice our blessings. We easily remember who hurt us; we soon forget who helped us.

This effect can even shape public policy, limiting our interest in helping the less fortunate. We’re so biased towards our adversities that we can’t empathize with the adversities of others, even when they might be far more challenging. And since we tend to neglect our God-given advantages—good parents, silver spoons, etc.—we discount the role they played in our success. The end result is a series of false beliefs about what it takes to succeed.

In a recent interview, Rob Lowe lamented the obstacles that had limited his early career opportunities. Handsome actors like himself, he said, are subject to “an unbelievable bias and prejudice against quote-unquote good-looking people.”

We’re all victims. Even beauty is a headwind.

Davidai, Shai, and Thomas Gilovich. "The headwinds/tailwinds asymmetry: An availability bias in assessments of barriers and blessings." Journal of Personality and Social Psychology 111.6 (2016): 835. 

Fewer Friends, Better Marriages: The Modern American Social Network

In A Book About Love, I wrote about research showing that the social networks of Americans have been shrinking for decades. Miller McPherson, a sociologist at the University of Arizona and Duke University, has helped document the decline. In 1985, 26.1 percent of respondents reported discussing important matters with a “comember of a group,” such as a church congregant. In 2004, McPherson found that the percentage had fallen to 11.8. In 1985, 18.5 percent of subjects had important conversations with their neighbors. That number shrank to 7.9 percent two decades later. Other studies have reached similar conclusions. Robert Putnam, for instance, has used the DDB Needham Life Style Surveys to show that the average married couple entertained friends at home approximately fifteen times per year in the 1970s. By the late 1990s, that number was down to eight, “a decline of 45 percent in barely two decades.”

These surveys raise the obvious question: If we’re no longer socializing with our neighbors, or having dinner parties with our friends, then what the hell are we doing? 

One possibility is screens. Conversation is hard; it’s much easier to chill with Netflix and the cable box. According to this depressing speculation, technology is an enabler of loneliness, allowing us to forget how isolated we’ve become. 

But there’s another possibility. While it seems clear that we’re spending less time with our friends and acquaintances (texting doesn’t count), we might be spending more time with our spouses and children. (McPherson found, for instance, that the percentage of Americans who said their spouse was their “only confidant” nearly doubled between 1985 and 2004.) If true, this would suggest that our social network isn’t fraying so much as it’s gradually becoming more focused and intimate.

A new paper by Katie Genadek, Sarah Flood and Joan Garcia Roman at the University of Minnesota, drawing from time use survey data from 1965 to 2012, aims to resolve these important unknowns. Their data provides a fascinating portrait of the social trends shaping the lives of American families.

I’ll start with the punchline: on average, spouses are spending more time with each other than they did in 1965. This trend is particularly visible among married couples with children. Here are the scientists: “In 1965, individuals with children spent about two hours per day with both their spouse and child(ren); by 2012 this had increased 50 minutes to almost three hours.” Instead of bowling with neighbors, we’re taking our kids to soccer practice.

Of course, when it comes to togetherness time, quality matters more than quantity. One cynical explanation for the increase in family time is that much of it might involve screens. Maybe we’re not hanging out—we’re just sharing a wifi network. But the data doesn’t seem to show that. In 1975, couples spent 79 minutes watching television together. In 2012, that number had increased by only 13 minutes. What’s more, spouses are still making time for shared activities that don’t involve TV. Although our total amount of leisure time has remained remarkably constant – Keynes’ leisure society has not come to pass – we are more likely to spend this free time with our spouse.

This is particularly true among couples with children. The big news buried in this time use data is that parents are doing a lot more parenting. In 1965, parents spent 41 minutes engaged in “primary care” for their little ones. That number had more than doubled, to 88 minutes, in 2012. We’re also far more likely to parent together, with the number of minutes spent as a family unit more than quadrupling, from 6 minutes in 1965 to 27 minutes in 2012. This increase in family time comes despite the sharp increase in women working outside the home.

It’s so easy to despair about the state of the world. What’s important to remember, however, is that these more intimate benchmarks of life are trending in the right direction. Amid all the calls to make America great again, we’re liable to forget that the greatest generations spent strikingly little time with their families. The nuclear family is supposed to be disintegrating, but these time diaries show us the opposite, as Americans are choosing to spend an increasing percentage of their time with their partner and children.

What makes this survey data more compelling is that it jibes with recent research showing the growing role played by our spouses in determining our own life happiness. In a separate study based on data from 47,000 couples, Genadek and Flood found that individuals are nearly twice as happy when they are with their spouse as when they’re not. Meanwhile, a recent meta-analysis of ninety-three studies by the psychologist Christine Proulx found that the rewards of a good marriage have surged in recent decades, with the most loving couples providing a bigger lift to the “personal well-being” of the partners. In fact, the influence of a good marriage on overall levels of life satisfaction has nearly doubled since the late 1970s. Given this happiness boost, it shouldn’t be too surprising that we’re spending more time with our spouses. If we’re lucky, we already live with the people who make us happiest.

Genadek, Katie R., Sarah M. Flood and Joan Garcia Roman. “Trends in Spouses’ Shared Time in the United States, 1965-2012.” Demography (2016)  

Why Facebook Rules the World

One day, when historians tell the strange story of the 21st century, this age of software and smartphones, populism and Pokemon, they will focus on a fundamental shift in the way people learn about the world. Within the span of a generation, we went from watching the same news shows on television, and reading the same newspapers in print, to getting a personalized feed of everything that our social network finds interesting, as filtered by a clever algorithm. The main goal of the algorithm is to keep us staring at the screen, increasing the slight odds that we might click on an advertisement.

I’m talking, of course, about Facebook. Given the huge amount of attention Facebook commands—roughly 22 percent of the internet time Americans spend on their mobile devices is spent on the social network—it has generated a surprisingly meager amount of empirical research. (It didn't help that the company’s last major experiment became a silly controversy.) Furthermore, most of the research that does exist explores the network’s impact on our social lives. In general, these studies find small, mostly positive correlations between Facebook use and a range of social measures: our Facebook friends are not the death of real friendship.

What this research largely overlooks, however, is a far more basic question: why is Facebook so popular? What is it about the social network (and social media in general) that makes it so attractive to human attention? It’s a mystery at the heart of the digital economy, in which fortunes hinge on the allocation of eyeballs.

One of the best answers for the appeal of Facebook comes from a 2013 paper by a team of researchers at UCSD. (First author Laura Mickes, senior authors Christine Harris and Nicholas Christenfeld.) Their paper begins with a paradox: the content of Facebook is often mundane, full of what the scientists refer to as “trivial ephemera.” Here’s a random sampling of my current feed: there’s an endorsement of a new gluten-free pasta, a smattering of child photos, emotional thoughts on politics and a post about a broken slide at the local park. As the scientists point out, these Facebook “microblogs” are full of quickly composed comments and photos, an impulsive record of everyday life.

Such content might not sound very appealing, especially when there is so much highly polished material already competing for our attention. (Why read our crazy uncle on the election when there’s the Times?) And yet, the “microblog” format has proven irresistible: Facebook’s “news” feed is the dominant information platform of our century, with nearly half of Americans using it as a source for news.  This popularity, write the scientists, “suggests that something about such ‘microblogging’ resonates with human nature.”

To make sense of this resonance, the scientists conducted some simple memory experiments. In their first study, they compared the mnemonic power of Facebook posts to sentences from published books. (The Facebook posts were taken from the feeds of five research assistants, while the book sentences were randomly selected from new titles.) The subjects were shown 100 of these stimuli for three seconds each. Then, they were given a recognition test consisting of these stimuli along with another 100 “lures” – similar content they had not seen – and asked to assess their confidence, on a twenty-point scale, as to whether they had previously been exposed to a given stimulus.

According to the data, the Facebook posts were much more memorable than the published sentences. (This effect held even after controlling for sentence length and the use of “irregular typography,” such as emoticons.) But this wasn’t because people couldn’t remember the sentences extracted from books – their performance here was on par with other studies of textual memory. Rather, it was largely due to the “remarkable memorability” of the Facebook posts. Their content was trivial. It was also unforgettable.
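
For context, recognition memory in studies like this is typically scored with signal-detection measures: hits on old items are weighed against false alarms on lures and combined into a sensitivity index such as d′. The sketch below uses hypothetical numbers and is not necessarily the authors’ exact analysis.

    # Minimal sketch of a standard signal-detection score for recognition memory.
    from statistics import NormalDist

    def d_prime(hit_rate: float, false_alarm_rate: float) -> float:
        z = NormalDist().inv_cdf          # convert rates to z-scores
        return z(hit_rate) - z(false_alarm_rate)

    # Hypothetical rates: items recognized more often at the same
    # false-alarm rate earn a higher sensitivity score.
    print(d_prime(0.85, 0.20))   # e.g., Facebook posts
    print(d_prime(0.70, 0.20))   # e.g., book sentences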

In a follow-up condition, the scientists replaced the book sentences with photographs of human faces. (They also gathered a new collection of Facebook posts, to make sure their first set wasn’t an anomaly.) Although it’s long been argued that the human brain is “specially designed to process and store facial information,” the scientists found that the Facebook posts were still far easier to remember.

This is not a minor effect: the difference in memory performance between Facebook posts and these other stimuli is roughly equivalent to the difference between people with amnesia due to brain damage and those with a normal memory. What’s more, this effect exists even when the Facebook content is about people we don’t even know. Just imagine how memorable it is when the feed is drawn from our actual friends.

To better understand the mnemonic advantage of microblogs, the scientists ran several additional experiments. In one study, they culled text from CNN.com, drawing from both the news and entertainment sections. The text came in three forms: headlines, sentences from the articles, and reader comments. As you can probably guess, the reader comments were much more likely to be remembered, especially when compared to sentences from the articles. Subjects were also better at remembering content from the entertainment section, at least compared to news content.

Based on this data, the scientists argue that the extreme memorability of Facebook posts is being driven by at least two factors. The first is that people are drawn to “unfiltered, largely unconsidered postings,” whether it’s a Facebook microblog or a blog comment. When it comes to text, we don’t want polish and reflection. We want gut and fervor. We want Trump’s tweets.

The second factor is the personal filter of Facebook, which seems to take advantage of our social nature.  We remember random updates from our news feed for the same reason we remember all the names of the Pitt-Jolie children: we are gossipy creatures, perpetually interested in the lives of others.

This research helps explain the value of Facebook, which is currently the 7th most valuable company in the world. The success of the company, which sells ads against our attention, is ultimately dependent on our willingness to read the haphazard content produced by other people for free. This might seem like a bug, but it’s actually an essential feature of the social network. “These especially memorable Facebook posts,” write the scientists, “may be far closer than professionally crafted sentences to tapping into the basic language capacities of our minds. Perhaps the very sentences that are so effortlessly generated are, for that reason, the same ones that are readily remembered.” While traditional media companies assume people want clean and professional prose, it turns out that we’re compelled to remember the casual and flippant. The problem, of course, is that the Facebook news algorithm is filtered to maximize attention, not truth, which can lead to the spread of sticky lies. When our private feed is full of memorable falsehoods, what happens to public discourse?

And it’s not just Facebook: the rise of the smartphone has encouraged a parallel rise in informal messaging. (We've gone from email to emojis in a few short years.) Consider Snapchat, the social network du jour. Its entire business model depends on the eagerness of users to consume raw visual content, produced by friends in the grip of System 1. In a universe overflowing with professional video content, it might seem perverse that we spend so much time watching grainy videos of random events. But this is what we care about. This is what we remember.

The creation of content used to be a professional activity. It used to require moveable type and a printing press and a film crew. But digital technology democratized the tools. And once that happened, once anyone could post anything, we discovered an entirely new form of text and video. We learned that the most powerful publishing platform is social, because it embeds the information in a social context. (And we are social animals.) But we also learned about our preferred style, which is the absence of style: the writing that sticks around longest in our memory is what seems to take the least amount of time to create. All art aspires to the condition of the Facebook post. 

Mickes, L., Darby, R. S., Hwe, V., Bajic, D., Warker, J. A., Harris, C. R., & Christenfeld, N. J. (2013). Major memory for microblogs. Memory & cognition, 41(4), 481-489.

The Psychology of the Serenity Prayer

One of the essential techniques of Cognitive-Behavioral Therapy (CBT) is reappraisal. It’s a simple enough process: when you are awash in negative emotion, you should reappraise the stimulus to make yourself feel better.

Let’s say, for instance, that you are stuck in traffic and are running late to your best friend’s birthday party. You feel guilty and regretful; you are imagining all the mean things people are saying about you. “She’s always late!” “He’s so thoughtless.” “If he were a good friend, he’d be here already.”

To deal with this loop of negativity, CBT suggests that you think of new perspectives that lessen the stress. The traffic isn’t your fault. Nobody will notice. Now you get to finish this interesting podcast.

It’s an appealing approach, rooted in CBT’s larger philosophy that the way an individual perceives a situation is often more predictive of his or her feelings than the situation itself. 

There’s only one problem with reappraisal: it might not work. For instance, a recent meta-analysis showed that the technique is only modestly useful at modulating negative emotions. What’s worse, there’s suggestive evidence that, in some contexts, reappraisal may actually backfire. According to a 2013 paper by Allison Troy, et al., among people who were stressed about a controllable situation—say, being fired because of poor work performance—better reappraisal ability was associated with higher levels of depression. 

Why doesn’t reappraisal always work? One possible answer involves an old hypothesis known as strategy-situation fit, first outlined by Richard Lazarus and Susan Folkman in the late 1980s. This approach assumes that there is no universal fix for anxiety and depression, no single tactic that always grants us peace of mind. Instead, we must be strategic about which techniques we use, as their effectiveness will depend on the larger context.

A new paper by Simon Haines et al. (senior author Peter Koval) in Psychological Science provides new evidence for the strategy-situation fit model. While previous research has suggested that the success of reappraisal depends on the nature of the stressor—it’s only useful when we can’t control the source of the stress—these Australian researchers wanted to measure the relevant variables in the real world, and not just in the lab. To do this, they designed a new smartphone app that pushed out surveys at random moments. Each survey asked their participants a few questions about their use of reappraisal and the controllability of their situation. These responses were then correlated with several questionnaires measuring well-being and mental health.

The results confirmed the importance of strategy-situation fit. According to the data, people with lower levels of well-being (they had more depressive symptoms and/or stress) used reappraisal in the wrong contexts, increasing their use of the technique when they were in situations they perceived as controllable. For example, instead of leaving the house earlier, or trying to perform better at work, people with poorer “strategy-situation fit” might spend time trying to talk themselves into a better mood. People with higher levels of well-being, in contrast, were more likely to use reappraisal at the right time, when they were confronted with situations they felt they could not control. (Bad weather, mass layoffs, etc.) This leads Haines et al. to conclude that, “rather than being a panacea, reappraisal may be adaptive only in relatively uncontrollable situations.”
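
To make the idea of strategy-situation fit concrete, here is a minimal sketch in Python. The data are entirely invented (the sample size, survey count, rating scales and effect sizes are my assumptions, not numbers from Haines et al.), and a simple within-person correlation stands in for the authors’ more sophisticated models. It only illustrates the logic: someone who reaches for reappraisal mainly in moments that feel uncontrollable gets a higher “fit” score, and in this toy dataset that score is built to track well-being, mirroring the direction of the paper’s finding.

```python
import numpy as np

rng = np.random.default_rng(0)
n_people, n_surveys = 100, 40

# Hypothetical experience-sampling responses on 1-7 scales (all numbers invented)
controllability = rng.integers(1, 8, size=(n_people, n_surveys)).astype(float)

# Half the sample reappraises mostly when control feels LOW (good fit),
# half mostly when control feels HIGH (poor fit), plus noise.
fit_sign = np.repeat([1.0, -1.0], n_people // 2)
reappraisal = (4 + fit_sign[:, None] * (4 - controllability)
               + rng.normal(0, 1.5, size=(n_people, n_surveys)))

def fit_score(ctrl, reap):
    """Within-person correlation between reappraisal use and UNcontrollability."""
    return np.corrcoef(-ctrl, reap)[0, 1]

fit = np.array([fit_score(c, r) for c, r in zip(controllability, reappraisal)])

# A made-up well-being score that tracks fit, echoing the paper's headline result
well_being = 50 + 10 * fit + rng.normal(0, 5, size=n_people)

print("correlation between strategy-situation fit and well-being:",
      round(float(np.corrcoef(fit, well_being)[0, 1]), 2))
```

The key design point is that fit is defined at the person level, from many momentary reports, which is why an experience-sampling app tells you more than a single retrospective questionnaire would.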

Why doesn’t reappraisal help when we can influence the situation? One possibility is that focusing on our reaction might make us less likely to take our emotions seriously. We’re so focused on changing our thoughts—think positive!—that we forget to seek an effective solution. 

Now for the caveats. The most obvious limitation of this paper is that the researchers relied on subjects to assess the controllability of a given situation; there were no objective measurements. The second limitation is the lack of causal evidence. Because this was not a longitudinal study, it’s still unclear whether higher levels of well-being are a consequence or a precursor of more strategic reappraisal use. How best to deal with our emotions is an ancient question, and it won’t be solved anytime soon.

That said, this study does offer some useful advice for practitioners and patients using CBT. As I noted in an earlier post, there is worrying evidence that CBT has gotten less effective over time, at least as measured by its ability to reduce depressive symptoms. (One of the leading suspects behind this trend is the growing popularity of the treatment, which has led more inexperienced therapists to begin using it.) While more study is clearly needed, this research suggests ways in which standard CBT might be improved. It all comes down to an insight summarized by the great Reinhold Niebuhr in the Serenity Prayer:

God, grant me the serenity to accept the things I cannot change,

Courage to change the things I can,

And wisdom to know the difference.                                         

That’s wisdom: tailoring our response based on what we can and cannot control. Serenity is a noble goal, but sometimes the best way to fix ourselves is to first fix the world.

Haines, Simon J., et al. "The Wisdom to Know the Difference: Strategy-Situation Fit in Emotion Regulation in Daily Life Is Associated With Well-Being." Psychological Science (2016): 0956797616669086.

How Southwest Airlines Is Changing Modern Science

The history of science is largely the history of individual genius. From Galileo to Einstein, Isaac Newton to Charles Darwin, we tend to celebrate the breakthroughs achieved by a mind working by itself, seeing more reality than anyone has ever seen before.

It’s a romantic narrative. It’s also obsolete. As documented in a pair of Science papers by Stefan Wuchty, Benjamin Jones and Brian Uzzi, modern science is increasingly a team sport: more than 80 percent of science papers are now co-authored. These teams are also producing the most influential research, as papers with multiple authors are 6.3 times more likely to get at least 1000 citations. The era of the lone genius is over.

What’s causing the dramatic increase in scientific collaboration? One possibility is that the rise of teams is a response to the increasing complexity of modern science. To advance knowledge in the 21st century, one has to master an astonishing amount of information and experimental know-how; because we have discovered so much, it’s harder to discover something new. (In other words, the mysteries that remain often exceed the capabilities of the individual mind.) This means that the most important contributions now require collaboration, as people from different specialties work together to solve extremely difficult problems.

But this might not be the only reason scientists are working together more frequently. Another possibility is that the rise of teams is less about shifts in knowledge and more about the increasing ease of interacting with other researchers. It’s not about science getting hard. It’s about collaboration getting easy.

While it seems likely that both of these explanations are true—the trend is probably being driven by multiple factors—a new paper emphasizes the changes that have reduced the costs of academic collaboration. To make this case, the economists Christian Catalini, Christian Fons-Rosen and Patrick Gaulé looked at what happens to scientific teams after Southwest Airlines enters a metropolitan market. (On average, the entrance of Southwest leads to a roughly 20 percent reduction in fares and a 44 percent increase in passengers.) If these research partnerships are held back by practical obstacles—money, time, distance, etc.—then the arrival of Southwest should lead to a spike in teamwork.

That’s exactly what they found. According to the researchers, after Southwest begins a new route, collaborations among scientists increase across every scientific discipline. (Physicists increase their collaborations by 26 percent, while biologists seem to really love cheap airfare: their collaborations increase by 85 percent.) To better understand these trends, and to rule out some possible confounds, Catalini et al. zoomed in on collaborations among chemists, tracking the research produced by 819 pairs of chemists between 1993 and 2012. Once again, they found that the entry of Southwest into a new market leads to an approximately 30 percent spike in collaboration among chemists living near the new routes. What’s more, this trend toward teamwork showed no sign of predating the arrival of the low-cost airline.
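
The logic of this natural experiment is easy to see in a toy before-and-after comparison. The numbers below are invented (the baseline collaboration rate, the entry year and the size of the jump are assumptions loosely borrowed from the figures above); the point is simply the pattern the researchers looked for: a flat, noisy pre-trend, then a sustained jump once the route opens.

```python
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1993, 2013)   # the window studied for the chemist pairs
entry_year = 2003               # hypothetical year Southwest enters this route

# Invented yearly counts of co-authored papers between chemists in two cities:
# a flat baseline before entry, roughly 30 percent higher afterwards.
baseline = 20
expected = np.where(years < entry_year, baseline, baseline * 1.3)
collaborations = rng.poisson(expected)

pre = collaborations[years < entry_year].mean()
post = collaborations[years >= entry_year].mean()
print(f"pre-entry mean: {pre:.1f}  post-entry mean: {post:.1f}  change: {post / pre - 1:.0%}")
```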

At first glance, one might expect these new collaborations triggered by Southwest to produce research of lower quality. After all, the fact that the scientists waited to work together until airfares were slightly cheaper suggests that they didn’t think their new partnership would create a lot of value. (A really enticing collaboration should have been worth a more expensive flight, especially since the arrival of Southwest didn’t significantly increase the number of direct routes.) But that isn’t what Catalini et al. found. Instead, they discovered that Southwest’s entry into a market led to an increase in higher-quality publications, at least as measured by the number of citations. Taken together, these results suggest that cheaper air travel is not only redrawing the map of scientific collaboration, but fundamentally improving the quality of research.

There is one last fascinating implication of this dataset. The spread of Southwest paralleled the rise of the Internet, as it became far easier to communicate and collaborate using digital tools, such as email and Skype. In theory, these virtual interactions should make face-to-face conversations unnecessary. Why put up with the hassle of air travel when there’s FaceTime? Why meet in person when there’s Google Docs? The Death of Distance and all that.

But this new paper is a reminder that face-to-face interactions are still uniquely valuable. I’ve written before about the research of Isaac Kohane, a professor at Harvard Medical School. A few years ago, he published a study that looked at the influence of physical proximity on the quality of research. He analyzed more than thirty-five thousand peer-reviewed papers, mapping the precise location of co-authors. Geography turned out to be a crucial variable: when coauthors were closer together, their papers tended to be of significantly higher quality. The best research was consistently produced when scientists were located within ten meters of each other, while the least cited papers tended to emerge from collaborators who were a kilometer or more apart.

Even in the 21st century, the best way to work together is to be together. The digital world is full of collaborative tools, but these tools are still not a substitute for meetings that take place in person.* That’s why we get on a plane.

Never change, Southwest.

Catalini, Christian, Christian Fons-Rosen, and Patrick Gaulé. "Did cheaper flights change the geography of scientific collaboration?" SSRN Working Paper (2016). 

* Consider a study that looked at the spread of Bitnet, a precursor to the internet. As one might expect, the computer network significantly increased collaboration among electrical engineers at connected universities. However, the boost in collaboration was far larger among engineers who were within driving distance of each other. Yet more evidence for the power of in-person interactions comes from a 2015 paper by Catalini, which looked at the relocation of scientists following the removal of asbestos from Paris Jussieu, the largest science university in France. He found that labs randomly relocated to the same area were 3.4 to 5 times more likely to collaborate with one another. Meatspace matters.

Do Social Scientists Know What They're Talking About?

The world is lousy with experts. They are everywhere: opining in op-eds, prognosticating on television, tweeting out their predictions. These experts have currency because their opinions are, at least in theory, grounded in their expertise. Unlike the rest of us, they know what they’re talking about.

But do they really? The most famous study of political experts, led by Philip Tetlock at the University of Pennsylvania, concluded that the vast majority of pundits barely beat random chance when it came to predicting future events, such as the winner of the next presidential election. They spun out confident predictions but were rarely held accountable when those predictions proved wrong. The end result was a public sphere that rewarded overconfident blowhards. Cable news, Q.E.D.

While the thinking sins identified by Tetlock are universal (we’re all vulnerable to overconfidence and confirmation bias), it’s not clear that the flaws of political experts generalize to other forms of expertise. For one thing, predicting geopolitics is famously fraught: there are countless variables to consider, interacting in unknowable ways. It’s possible, then, that experts might perform better in a narrower setting, attempting to predict the outcomes of experiments in their own field.

A new study, by Stefano DellaVigna at UC Berkeley and Devin Pope at the University of Chicago, puts academic experts to just such a test. They assembled 208 experts from the fields of economics, behavioral economics and psychology and asked them to forecast the impact of different motivators on how well subjects performed an extremely tedious task. (The subjects had to press the “a” and “b” buttons on their keyboard as quickly as possible for ten minutes.) The experimental conditions ranged from the obvious (paying for better performance) to the subtle, as DellaVigna and Pope also looked at the influence of peer comparisons, charity and loss aversion. What makes these questions interesting is that DellaVigna and Pope already knew the answers: they’d run these motivational studies on nearly 10,000 subjects. The mystery was whether or not the experts could predict the actual results.

To make the forecasting easier, the experts were given three benchmark conditions and told the average number of presses, or “points,” in each condition. For instance, when subjects were told that their performance would not affect their payment, they averaged only 1521 points. However, when they were paid 10 cents for every 100 points, they averaged 2175 total points. The experts were asked to predict the number of points in fifteen additional experimental conditions.

The good news for experts is that these academics did far better than Tetlock’s pundits. When asked to predict the average points in each condition, they demonstrated the wisdom of crowds: the average of their forecasts was off by only 5 percent. If you’re a policy maker trying to anticipate the impact of a motivational nudge, you’d be well served by asking a bunch of academics for their opinions and averaging their answers.
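
Why does pooling help so much? Here is a toy simulation, not the study’s data: each expert’s forecast is assumed to be unbiased but noisy (the 15 percent spread is my assumption), and the 2175-point benchmark stands in for the quantity being predicted. Averaging a couple hundred noisy guesses cancels most of the idiosyncratic error, which is the wisdom of crowds in miniature.

```python
import numpy as np

rng = np.random.default_rng(2)
true_points = 2175   # the incentivized benchmark condition mentioned above
n_experts = 208      # number of academic forecasters in the study

# Assume, purely for illustration, unbiased forecasts with 15 percent noise
forecasts = true_points * (1 + rng.normal(0, 0.15, size=n_experts))

individual_error = np.mean(np.abs(forecasts - true_points)) / true_points
crowd_error = abs(forecasts.mean() - true_points) / true_points

print(f"typical individual error: {individual_error:.1%}")
print(f"error of the pooled (mean) forecast: {crowd_error:.1%}")
```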

The bad news is that, on an individual level, these academics still weren’t very good. They might have looked prescient when their answers were pooled together, but the results were far less impressive if you looked at the accuracy of experts in isolation. Perhaps most distressing, at least for the egos of experts, is that non-scientists were much better at ranking the treatments against each other, forecasting which conditions would be most and least effective. (As DellaVigna pointed out in an email, this is less a consequence of expert failure and more a tribute to the fact that non-experts did “amazingly well” at the task.) The takeaway is straightforward: there might be predictive value in a diverse group of academics, but you’d be foolish to trust the forecast of a single one.

Furthermore, there was shockingly little relationship between academic credentials and forecasting performance. Full professors tended to underperform assistant professors, while having more Google Scholar citations was correlated with lower levels of accuracy. (PhD students were “at least as good” as their bosses.) Academic experience clearly has virtues. But making better predictions about experiments does not seem to be one of them.

Since Tetlock published his damning critique of political pundits, he has gone on to study so-called “superforecasters,” those amateurs whose predictions of world events are consistently more accurate than those of intelligence analysts with access to classified information. (In general, these superforecasters share a particular temperament: they’re willing to learn from their mistakes, quick to update their beliefs and tend to think in shades of gray.) After mining the data, DellaVigna and Pope were able to identify their own superforecasters. As a group, these non-experts significantly outperformed the academics, improving on the average error rate of the professors by more than 20 percent. These people had no background in behavioral research. They were paid $1.50 for 10 minutes of their time. And yet, they were better than the experts at predicting research outcomes.

The limitations of expertise are best revealed by the failure of the experts to foresee their own shortcomings. When the academics were surveyed by DellaVigna and Pope, they predicted that highly cited experts would be significantly more accurate. (The opposite turned out to be true.) They also expected PhD students to underperform the professors (that didn’t happen, either) and that academics with training in psychology would perform the best. (The data pointed in the opposite direction.)

It’s a poignant lapse. These experts have been trained in human behavior. They have studied our biases and flaws. And yet, when it comes to their own performance, they are blind to their own blind spots. The hardest thing to know is what we don’t.

DellaVigna, Stefano, and Devin Pope. "Predicting Experimental Results: Who Knows What?" NBER Working Paper (2016).