All posts by Talia Shirazi

Applying Evolutionary Theory to Psychiatry

Our emotions are the result of hundreds of thousands of years of evolutionary pressure and have been described as “Darwinian algorithms of the mind” by evolutionary scientists John Tooby and Leda Cosmides. Though emotions likely evolved to serve specific adaptive purposes, several current psychiatric diagnoses identify presumably ‘pathological’ emotional states, such as generalized anxiety disorder (GAD) in the case of excessive anxiety, or major depressive disorder (MDD) in the case of excessive sadness or apathy.

Rather than being pathological, these emotional states could in fact be somewhat adaptive if looked at through an evolutionary lens, as physician Randolph Nesse and evolutionary biologist George Williams argue in their book, “Why We Get Sick: The New Science of Darwinian Medicine.”

Let us take the case of anxiety. Anxiety likely evolved to keep us away from dangerous situations, and to activate the cognition and behaviors that help us escape from dangerous situations we find ourselves in. Nesse and Williams mention “the berry picker who does not flee a grizzly bear” and “the fisherman who sails off alone into a winter storm” (p. 212) to illustrate how a lack of anxiety can imperil survival, and thus just how crucial the emotion is.

It would then seem that high, constant levels of anxiety would lead to the greatest evolutionary fitness, as individuals always aware of and ready to flee from dangerous situations would have the highest rates of survival. Though it may not be pleasant for the individual to constantly experience high levels of anxiety, as Nesse and Williams eloquently and bluntly phrase it, “natural selection cares only about our fitness, not our comfort” (p. 212).

The reason all of us do not constantly experience high levels of anxiety lies in anxiety’s biological costs. The ‘fight or flight’ response associated with anxiety is calorically expensive, leaving less energy for other processes. Furthermore, a large body of work suggests that chronic stress has negative effects on both the body and the mind. Thus, though perpetually high levels of anxiety would indeed help guard us from danger, the costs of maintaining them may be substantial enough to negate the potential benefits.

The levels of anxiety labeled as pathological have been designated as such by mental health professionals, not by evolutionary scientists, leading to potential differences in how physicians and evolutionary thinkers would classify pathological emotion. Large empirical studies have not been conducted to determine whether individuals diagnosed with GAD actually have lower fitness than individuals without a diagnosis.

Interestingly, at the other end of the anxiety spectrum, anecdotal evidence suggests that too little anxiety may jeopardize an individual’s fitness and survival. Such individuals are often unable to accurately assess potential dangers, and more frequently end up in socially and physically undesirable situations. Yet there is currently no psychiatric diagnosis for this end of the anxiety spectrum, despite its arguably being the end that is ultimately more detrimental to the individual.

Throughout their chapter on mental disorders and throughout the rest of the book, Nesse and Williams stress the potential utility of evolutionary theory across a wide range of fields in medicine (see this previous AEPS post on Nesse’s contributions to cancer biology). Before jumping to disrupt a certain natural biological process or emotional/cognitive state, it is important to remain cognizant that such processes and states have been crafted to increase our genetic fitness, and that in some cases, we may perhaps be best off letting the “Darwinian algorithms of the mind” and body run their course.

Genuine Disorders or Environmental Discrepancies? Review of an Evolutionary Psychology Explanation of Female Sexual ‘Dysfunction’

Like many other primates, men and women engage in sexual intercourse for a myriad of reasons: to satisfy their own sexual desires, to satisfy the desires of their partners, to gain access to resources, and every once in a while, to procreate. A woman’s ability to pass her genes on to healthy offspring contributes to her genetic fitness, and this ability is closely tied to the timing and frequency of sexual intercourse. From an evolutionary perspective, there should then be selection pressure for alleles associated with high sexual arousal and desire in women.

Epidemiological data, however, indicate that this is not the case—recent estimates suggest that up to 50% of women experience sexual dysfunction, characterized by dampened sexual arousal or desire, or an inability to reliably experience orgasm during intercourse. But if natural selection favors alleles related to high sexual desire and arousal (and subsequently, the creation of offspring), why does the prevalence of sexual dysfunction in women remain so high?

A recent article in Adaptive Human Behavior and Physiology by anthropologist Menelaos Apostolou suggests that the clinical conditions we’ve labeled ‘dysfunctions’ were not genuine dysfunctions in the pre-industrial environments where the majority of human evolution took place. In such environments, women’s sexuality was strictly regulated by parental and societal forces. Women were required by their parents to refrain from intercourse outside the context of marriage, and were often married off to partners chosen by the parents. Once in that marriage, it was not illegal for husbands to force intercourse upon their wives. In this sort of environment, what we today would consider ‘sexual dysfunctions’ were not dysfunctions at all, as they had little bearing on a woman’s reproductive fitness—having high sexual arousal or desire would not significantly modulate the frequency or timing of a woman’s sexual activity. In fact, low sexual desire and arousal may even have been a good thing, as such women would not be motivated to seek intercourse before marriage or with partners other than their husbands.

Fast forward to the post-industrial society we live in currently, and the regulation of women’s sexuality in many parts of the world has changed considerably. Rather than being potentially maladaptive as they were during much of human evolution, high sexual desire and arousal are traits that now actually increase a woman’s reproductive fitness. In a society where women freely choose their mates and regulate their own sexual behavior, those with greater arousal and desire may engage in intercourse more often, and thus be more likely to pass on their genes.

This concept, in which traits that are disadvantageous today were selectively neutral in the ancestral past, is called ancestral neutrality. Apostolou argues that we have not lived in post-industrial societies with unregulated female sexuality long enough for evolution to catch up and ‘weed out’ alleles for low sexual desire and arousal, which explains the high reported prevalence of sexual ‘dysfunctions.’ However, depending on the extent to which low sexual arousal and desire decrease a woman’s reproductive fitness in this post-industrial context, the alleles for these traits will likely become increasingly rare over time.

Apostolou’s paper highlights the important point that our conceptualizations of ideas such as health and illness are strongly time- and culture-dependent. Low sexual arousal and desire have gone from being advantageous traits in pre-industrial societies, to being natural variants in female sexuality in the late 20th century, to being dysfunctions warranting DSM-V categorization and the development of drugs to ‘cure’ them. (Sidenote: women’s sexual function is not the only arena in which societal views shape what we consider normal or abnormal. Rather than being labeled mentally ill, individuals with what the DSM-V would categorize as schizophrenia were, in ancestral societies, often regarded as shamans with significant spiritual and healing powers.) It will be interesting to track how societies define health and illness in relation to female sexuality as our conceptualizations of sexuality continue to develop.

Stop Counting, Start Collecting: Hormone Measurements in Evolutionary Psychology Research

In recent years, evolutionary psychologists have conducted lab-based and naturalistic studies suggesting that naturally cycling women (i.e. women who are not on hormonal contraceptives, such as the pill) experience a suite of behavioral and cognitive changes depending on whether they are in the follicular, ovulatory, or luteal phase of their menstrual cycles. During ovulation, when a woman’s chance of conception is highest, she is likely to report higher levels of sexual desire, have a stronger preference for masculine-looking men, and wear certain types of clothing—specifically, red clothing.

In 2013, a study conducted by psychologists Alec Beall and Jessica Tracy found that women at high conception risk (women who self-reported being on days 6-14 of the cycle) were over 3 times as likely as women at low conception risk (women who self-reported being on days 0-5 and 15-28 of the cycle) to wear red or pink shirts. Day of the cycle was determined by counting the number of days since women’s last self-reported menses.

There was just one problem—that “day of the cycle was determined by counting the number of days since women’s last self-reported menses.” This counting method is frequently employed in studies relating cycle phase to behavior because of its ease relative to collecting and assaying saliva samples for hormone concentrations. However, even before this study was conducted, there were several reasons to doubt the method’s accuracy in classifying high versus low fertility days, which makes results from studies relying on it suspect.
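For concreteness, here is a minimal sketch of what the counting method amounts to in code, using the day ranges from Beall and Tracy’s study described above (the function name and structure are mine, purely for illustration):

```python
def conception_risk_by_counting(cycle_day: int) -> str:
    """Classify a self-reported cycle day as high or low conception risk.

    Day ranges follow Beall and Tracy's study: days 6-14 count as high
    conception risk, days 0-5 and 15-28 as low conception risk.
    """
    if 6 <= cycle_day <= 14:
        return "high"
    if 0 <= cycle_day <= 28:
        return "low"
    raise ValueError("cycle day falls outside the 0-28 window used in the study")
```

Note that the input is itself a self-report (days since last menses), so any error in a woman’s recall propagates directly into the classification.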

Acknowledging this flaw, evolutionary psychologists Adar Eisenbruch, Zachary Simmons, and James Roney conducted analyses similar to Beall and Tracy’s, but instead of relying on the counting method, they collected saliva samples (which were then assayed for hormone concentrations) each time women came into the lab. They also applied the counting method, and examined the concordance between the counting and hormonal methods of conception risk classification.

Using the counting method, there was no difference in the percentage of low and high conception risk women who wore red. When using the hormonal method, however, a significantly higher percentage of high conception risk women wore red than did low conception risk women. So, while the use of Beall and Tracy’s methods resulted in an inability to replicate their original findings, the use of hormonal methods for conception risk classification supported the finding that high conception risk women are more likely to wear red.

Perhaps more interesting, and certainly more worrisome than this central finding, was the lack of concordance between the counting and hormonal methods of classification—the two agreed in a mere 64% of cases. In other words, more than a third of the time, the two methods placed women in opposite conception risk categories. Further, almost half of the days identified as high conception risk by hormonal methods were classified as low conception risk by the counting method.
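To see how a concordance figure like this is computed, here is a small sketch using made-up day-level classifications (the real numbers come from Eisenbruch et al.’s lab-visit data, which are not reproduced here):

```python
import pandas as pd

# Made-up day-level classifications standing in for Eisenbruch et al.'s data.
days = pd.DataFrame({
    "counting": ["high", "low",  "low", "high", "low", "high", "low", "low"],
    "hormonal": ["high", "high", "low", "low",  "low", "high", "high", "low"],
})

# Concordance: the fraction of days on which the two methods agree.
concordance = (days["counting"] == days["hormonal"]).mean()
print(f"concordance: {concordance:.0%}")

# The cross-tabulation shows where the disagreements fall, e.g. how many
# hormonally high-risk days the counting method labeled as low risk.
print(pd.crosstab(days["hormonal"], days["counting"]))
```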

That the counting method can differ substantially from hormonal methods of conception risk classification challenges the reliability of some prior findings of cycle phase effects. While the counting method is certainly easier, quicker, and less expensive than collecting and assaying saliva samples, it is unclear whether these advantages outweigh the findings of Eisenbruch et al. suggesting that the counting method may be incorrect more than a third of the time.

As evidence of the counting method’s flaws continues to accumulate, its use in evolutionary psychology may become increasingly hard to justify, opening the door for broader use of more methodologically sound research practices.

Men’s Mate Preferences: What OkCupid Can Tell Us About Evolutionary Psychology

In 2014, approximately 10 million people used the online dating website OkCupid. While for users this means that billions of messages were exchanged and (probably) thousands of bad dates were had, for OkCupid co-founder Christian Rudder, it means there is an endless pool of data on interpersonal interactions begging to be analyzed.

In his 2014 book Dataclysm, Rudder analyzes data from OkCupid along with other social media sites (e.g. Twitter) to teach us about how we see ourselves, and how we interact with others. While many of his findings are noteworthy, he describes a phenomenon particularly relevant to the potential evolutionary mechanisms that influence mate choice.

Rudder asked men from ages 20 to 50 to rate the attractiveness of women of all ages. He then figured out the age of the women who looked best (i.e. got the highest ratings) to men who were 20, to men who were 21, and so on.
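In code, the aggregation Rudder describes would look something like the sketch below; the table and column names are hypothetical stand-ins, since the underlying OkCupid data are not public:

```python
import pandas as pd

# Hypothetical ratings table: one row per rating, with the rater's age,
# the rated woman's age, and the attractiveness score he gave.
ratings = pd.DataFrame({
    "man_age":   [20, 20, 30, 30, 46, 46],
    "woman_age": [20, 35, 20, 35, 20, 35],
    "score":     [4.5, 3.8, 4.6, 3.5, 4.4, 3.2],
})

# Average the scores for each (rater age, rated age) pair...
means = ratings.groupby(["man_age", "woman_age"], as_index=False)["score"].mean()

# ...then, for each rater age, keep the rated age with the highest average.
best = means.loc[means.groupby("man_age")["score"].idxmax(),
                 ["man_age", "woman_age"]]
print(best)
```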

Men who were 20 rated women who were 20 as the most attractive. Men who were 21 rated women who were 20 as the most attractive. Jumping forward a bit, men who were 30 rated women who were 20 as the most attractive, as did men who were 31, as did men who were 46, as did men who were 47…

As you can see, the men in Rudder’s sample prefer more or less the same thing across all ages: women who are 20. From an evolutionary standpoint, this kind of innate preference for women of this age makes some sense: a woman’s chance of conception is highest in her early 20s, and decreases continually thereafter.

So, if we believe that some behaviors and preferences in men are driven by evolutionary mechanisms to facilitate the creation of offspring, men with preferences for women at peak fertility could potentially be more reproductively successful than men with preferences for women who are older and thus less fertile.

Interestingly, this preference of men for women in their early 20s did not translate to actual behavior on the site. When indicating their preferences, most men said they were looking for someone around their age, and sent the most messages to women within 10 years of their own age.

This disconnect between what men say they want and who they rate as most attractive may be in part due to what women on the site want. While men rate women who are 20 as most attractive regardless of their own age, women rate men who are in their own age range as the most attractive, and indicate that they are looking for someone in that same range.

What does this mean practically? While a 40-year-old man messaging many 20-year-old women on the site may get some positive responses, he is much more likely to get them from women in their 30s and 40s, and should take this into account to maximize his chances of finding love (or whatever else he may be looking for on OkCupid).

For more findings about how Twitter has influenced the way we write, which phrases are most common in White OkCupid users and least common in Asian users, and why the variability in your attractiveness rating is more important than your average rating, check out Rudder’s book, Dataclysm: Love, Sex, Race, and Identity–What Our Online Lives Tell Us about Our Offline Selves.

Hormone Measurements in Evolutionary Psychology Research, Part 2: The Prevalence of False Negatives

In a blog post a few weeks ago, I reviewed a study that highlighted the discrepancies between counting and hormonal methods in classifying women as either high or low conception risk in evolutionary psychology research. I concluded that evidence of such discrepancies may “challenge the reliability of some prior findings of cycle phase effects.”

What I meant to suggest with that sentence is not that previous findings of cycle phase effects are false positives, but rather that some null findings in unpublished studies may actually be false negatives, and/or that cycle phase effects may be stronger than the literature currently suggests.

But why would using a messy proxy measurement of conception risk (in this case, the counting method) result in false negative findings, or underestimates of a true effect size? Let’s use a simple thought experiment to make this a little clearer:

Say we have a population of 13 males and 13 females, and we are interested in whether there is a significant difference in height between the two sexes. We measure each individual’s height in inches, arrange them in order, and come up with these data. The pink cells represent values for females, and blue for males.

The two distributions overlap a bit, but overall, it looks like males are on average taller than females. We run an independent samples t-test on our small sample, and voila! At p<0.01, our statistical test is significant, and we can conclude that the average heights of males and females differ.

Let’s say that for some reason, rather than asking individuals what their biological sex is, we’ll use a proxy measurement to determine biological sex: hair length. We decide that individuals with long hair will be classified as females, and individuals with short hair will be classified as males.

Unfortunately for us, that is a horrible way to differentiate between the biological sexes in this day and age. Plenty of females have short hair, while plenty of males have long hair (especially these days, with the popularity of man-buns reaching an all-time high).

Our data may end up looking a little more like this—our two columns, rather than being ‘female’ and ‘male,’ are ‘long hair’ and ‘short hair,’ because of how we decided to classify sex. Pink cells still reflect values for (truly) biological females, and blue for (truly) biological males.

The mean heights for these two groups still aren’t the same, but we do the same independent samples t-test that we did earlier, and our p value (p=0.12) is no longer statistically significant. This would lead us to conclude that there is no height difference between females and males; however, since we know this is not true, that conclusion would be a false negative.
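A quick simulation reproduces the mechanics of this thought experiment. The heights below are randomly generated stand-ins for the original table, so the exact p values will differ from those above, but the pattern (a real difference washed out by misclassification) is the same:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Randomly generated stand-ins for the original table: 13 females and
# 13 males with overlapping but offset height distributions (inches).
female = rng.normal(64, 3, 13)
male = rng.normal(68, 3, 13)

# Grouping by true biological sex: the height difference is detectable.
_, p_true = stats.ttest_ind(female, male)
print(f"grouped by sex:  p = {p_true:.4f}")

# Mimic the hair-length proxy by swapping 4 people from each group
# (8 of 26, about 31% of the sample, as in the thought experiment).
long_hair = np.concatenate([female[:9], male[:4]])
short_hair = np.concatenate([male[4:], female[9:]])
_, p_proxy = stats.ttest_ind(long_hair, short_hair)
print(f"grouped by hair: p = {p_proxy:.4f}")
```

Misclassification pulls the two group means toward each other, shrinking the apparent effect and inflating the p value, which is exactly why a noisy proxy biases results toward false negatives.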

In the thought experiment above, about 31% of the total sample was misclassified by sex, and this magnitude of misclassification was enough to lead us to a false negative finding. Looking at cycle phase research specifically, classification of days as either low or high conception risk using the counting method may be incorrect up to 36% of the time when compared to more accurate hormonal methods. While most of the women classified as high conception risk by counting methods are classified correctly and thus display a given phenotype in behavior or preferences, mistakenly including low conception risk women (who display a different phenotype) in that group interferes with our ability to understand the true extent and magnitude of cycle phase effects.

Now, it has been suggested that some previously reported findings are false positives (rather than false negatives) due to something called ‘researcher degrees of freedom.’ Because the days of the menstrual cycle considered high or low conception risk days are not agreed upon, the classification schema used by a team of researchers to distinguish between phases of the menstrual cycle is in part arbitrary (see this article for a great chart showing the variability among studies in the way phases of the cycle are defined). If statistically significant cycle phase effects are not observed when using one classification schema, it could be that researchers change the days they consider to be high and low risk, and do so until the desired effect is significant.

Though this is possible, meta-analyses and examination of p-curves suggest that this is not the case, and that further inquiry on the extent and breadth of changes in behavior and cognition over the menstrual cycle is warranted.

Note: For those reading who are as interested in counting and hormonal methods of conception risk classification as I am, check out this cool recent article in Evolution and Human Behavior.

Special thanks to Adar Eisenbruch, a current evolutionary psychology graduate student, for his guidance on topics discussed in this post.

Sex Disparities in the Workplace: Is Competitiveness to Blame?

It’s no secret—across many cultures, men and women aren’t equal in the workplace. Men are more likely to hold high positions, and they earn higher salaries than their female counterparts on most rungs of the corporate ladder. A study by Drs. Coren Apicella and Anna Dreber, published in this September’s issue of Adaptive Human Behavior and Physiology, suggests that some of these workplace disparities may exist because men are more willing to engage in competition than women are, though this willingness to compete may differ depending on the type of task.

Participants were all members of the Hadza, a hunter-gatherer group living in remote areas of Tanzania. The Hadza are often studied when determining whether certain psychological traits, like mate preferences or competitiveness, may have been present in our early ancestors.

Participants engaged in 3 different tasks: a gender-neutral task, a female-centric task, and a male-centric task. Results suggested that:

“Hadza boys and men are more competitive than Hadza girls and women. This difference, however, is only significant for the gender-neutral task (skipping rope) and the male centric-task (handgrip strength)”.

However, when it came to actual performance,

“Boys and men are significantly more competitive than girls and women in skipping rope, even though they perform equally well when it comes to both practice jumps and actual performance. The sex difference in competitiveness found for handgrip strength, with men competing more than females, is less surprising since men are typically stronger than women.”

In the female-centric task (bead collection), women performed better on the task than men, but they did not display significantly different levels of competitiveness than men.

The sex differences in competitiveness observed in the Hadza population across all age groups, though only present in certain tasks, tend to support the idea that

“Financial and labor outcome disparities… may, in part, result from sex differences in economic preferences such as willingness to compete.”

While we have previously assumed the existence of evolutionary sex differences in competitiveness in humans, these assumptions have come largely from work done in nonhuman primates. Some prior work in humans has suggested sex differences in competitiveness, though Apicella and Dreber provide us with some of the first data on these sex differences across different task types.