Monday, February 15, 2016

A wake-up call for PhD education in biology


For some people, going to graduate school may be an important experience just for the opportunity to explore interesting and important questions, to satisfy an intrinsic and pressing curiosity about the world, about life, or about how to make them better.  A percentage of these people, at the PhD level, manage to turn that quest into a lifetime research career similar to that of their grad school mentors — in academia.

Today, however, that percentage (in the life sciences at least) is very small and shrinking (as a few mouse clicks on a Google search will quickly show).  Another percentage, also relatively small, will find employment as researchers in government or in the private sector (most of the latter positions require only MSc or Bachelor's qualifications).  Overall then, the news looks bad:  there is now a large oversupply of PhD students, typically spending about five years of their lives as research apprentices within universities, training to be career researchers that the vast majority of them will never be.  And all while living below the poverty line.

Clearly undergraduates need to think twice and hard about what they can realistically expect to get from going on to graduate school, especially in doctoral studies.  But as the grad students in my own department have been asking lately: maybe universities should also think about how to change what they can expect to get.

This calls on universities to revisit the definitions of their ‘learning outcomes’ (LOs) for graduate education, particularly for PhDs.  Traditionally, these LOs (at least in my own field of Biology) are virtually all about preparing students to become frontline researchers: asking good questions, collecting good data, making important discoveries, and publishing them vigorously.  But most graduates will never be directly involved with these activities after graduation (and will consequently soon be largely out of touch and inexperienced with the latest advances in methodology).  Recognizing this, PhD students are now asking (and doubting) whether — after five or more years of getting groceries from the food bank — they will at least graduate with good LOs in other kinds of broader and peripheral expertise (e.g. networking, collaboration and interpersonal skills, teaching, budget management, grantsmanship, people management, and other workplace ‘smarts’).  These are the LOs that would equip them (and make them competitive) for other kinds of employment, e.g. as corporate executives, teachers, university/college administrators, supervisors in government, and managers in industry — positions in which they will inevitably not be called ‘researchers’.

Universities then need to address an important question:  Does the ordinary working environment of grad school not already include sufficient opportunity for students to get these ‘broader skills’ LOs simply by ‘watching, asking, and doing’ in the course of routine research activities and interactions with colleagues and supervisors?  If the answer is no, then there is a second and tougher question to address:  How can universities do a better job of delivering these ‘broader skills’ LOs for PhD students, without compromising other things that universities aim to do?

An important consideration here is the perspective of the faculty supervisor.  Recruiting graduate students is part of the employment obligation of faculty, but only secondarily.  Faculty have graduate students mainly because they need them to fulfil one of their more primary employment obligations (and career goals):  to publish research (a lot of it).  Usually this involves competing successfully as a PI ('Principal Investigator') for research grants (especially NSERC grants, in Canada) that will pay for research costs.  Accomplishing the latter requires a team of research collaborators to spend the grant money on, and thus generate publishable research.  And in order to win these grants, NSERC requires that the team consist mostly of members who will receive training as HQP (‘Highly Qualified Personnel’) — particularly graduate students, and particularly with evidence that publication success has helped past PhD students land university postdoctoral and tenure-track jobs.

The expected training involved here will normally include, in varying degrees, the ‘broader skills’ LOs mentioned above.  But one thing is certain:  the priorities and motivations of most supervisors will necessarily be driven to a very large extent by the accomplishment that is most rewarded by the university employer (and of course is also most important to the supervisor’s reputation) — i.e. whatever it takes to generate a high quantity and/or quality of published research.  This product is probably correlated to some extent with good mentoring of ‘broader skills’ for the grad students within a lab.  But it need not be, and probably isn’t strongly correlated.  Instead, publication success, and hence the employer reward to the faculty member (tenure, promotion, salary increases) will be strongly correlated with the number of graduate students that he/she has supervised, and the proportion that go on to obtain academic positions. 

This necessarily means that the most conspicuous LOs that grad students can presently expect to get will be as research apprentices — essentially, to become career researchers like their supervisors — with some ‘broader skills’ of course thrown in (including from grad courses, or by ordinary osmosis) — but only as time permits, and to the extent that they do not compromise the reputational and employer rewards to the supervisor. 

The current PhD graduate education experience then — as with the faculty job experience — is a product of the culture of academia.  Changing the first will require changing the second, and neither can be changed without changing the culture. 

If change is needed, and if it is going to happen, two things will be required from universities: (1) consultation with graduate students to better define, and/or revise (and publish) the expected learning outcomes of a PhD graduate education; and (2) ensuring that these LOs have substance and are taken seriously, by finding a way to make faculty supervisors accountable for delivering them.  But these measures will never get off the ground as long as granting agencies, like NSERC, continue — as part of the adjudication criteria for grant applications — to count how many graduate students an applicant has supervised, and how many have gone on to post-doctoral or tenure-track positions in academia.

Saturday, January 23, 2016

Same sex attraction — A Darwinian paradox, no more

Like most human behavioural traits, a preference for same-sex sexual relationships can be informed in part by learning and environmental / developmental experiences.  A role for genes is also now abundantly clear.  So-called ‘gay genes’ have not yet been precisely identified, but pedigree and twin studies have shown that homosexuality tends to run in families (reviewed in Ngun et al. 2011), and a recent genetic analysis of 409 pairs of gay brothers links sexual orientation in men with particular regions of the human genome (Sanders et al. 2015).  The pressing question then — described often as a ‘Darwinian paradox’ — is how do we account for the common occurrence of homosexuality in evolutionary terms, given that it would seem to present a severe limitation on evolutionary fitness through one’s direct lineage? Several explanations and speculations have been offered (see the very accessible review in Barash 2012) — and many illustrate, as the saying goes: ‘things are not always as they seem’.

For male homosexuality, one of the best explanations so far comes from recent studies suggesting that female relatives of gay men generally have more offspring than the female relatives of straight men.   In other words, there are genetic factors transmitted through the maternal line (partly linked to the X-chromosome) that increase the probability of becoming homosexual in males, but they promote higher fecundity in females (Camperio-Ciani et al. 2004, Iemmola and Camperio-Ciani 2009).  Hence, the genetic factors that “… influence homosexual orientation in males are not selected against because they increase fecundity in female carriers, thus offering a solution to the Darwinian paradox and an explanation of why natural selection does not progressively eliminate homosexuals” (Iemmola and Camperio-Ciani 2009).
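The balancing logic of this kind of hypothesis can be sketched with a toy X-linked population-genetics model.  To be clear, this is a deliberately simplified illustration of sexually antagonistic selection in general; the selection coefficients and the additive fitness scheme below are my own assumptions, not values taken from the cited studies.  The point is simply that an allele lowering the fitness of hemizygous males while raising fecundity in female carriers can settle at a stable intermediate frequency rather than being eliminated:

```python
# Toy model: an X-linked allele 'A' costs hemizygous males fitness (1 - s)
# but adds t per copy to female fecundity (illustrative parameters only).

def step(p_egg, p_sperm, s=0.3, t=0.2):
    """Advance one generation; returns next (egg, X-sperm) A-frequencies."""
    # Daughters get one X from each parent; genotype frequencies:
    f_AA = p_egg * p_sperm
    f_Aa = p_egg * (1 - p_sperm) + (1 - p_egg) * p_sperm
    f_aa = (1 - p_egg) * (1 - p_sperm)
    # Additive female fitnesses: AA = 1 + 2t, Aa = 1 + t, aa = 1
    w_bar = f_AA * (1 + 2 * t) + f_Aa * (1 + t) + f_aa
    # A-frequency among next generation's eggs (after female selection)
    p_egg_next = (f_AA * (1 + 2 * t) + 0.5 * f_Aa * (1 + t)) / w_bar
    # Sons are hemizygous (X from mother); A-males survive at rate (1 - s)
    p_sperm_next = p_egg * (1 - s) / (1 - p_egg * s)
    return p_egg_next, p_sperm_next

p_egg = p_sperm = 0.01             # start with 'A' rare
for _ in range(5000):
    p_egg, p_sperm = step(p_egg, p_sperm)

p_hat = (2 * p_egg + p_sperm) / 3  # X copies: 2/3 reside in females, 1/3 in males
print(f"equilibrium A frequency ~ {p_hat:.3f}")  # intermediate, not 0 or 1
```

With these (arbitrary) coefficients, both the allele and its alternative increase when rare, so the iteration converges to an interior equilibrium; make the male cost s large enough relative to the female benefit t, and the allele is lost instead.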


A similar hypothesis (that applies to either male or female homosexuality) is suggested by Zietsch et al. (2008):

“The genes influencing homosexuality have two effects.  First, and most obviously, these genes increase the risk for homosexuality, which ostensibly has decreased Darwinian fitness.  Countervailing this, however, these same genes appear to increase sex-atypical gender identity, which our results suggest may increase mating success in heterosexuals. This mechanism, called antagonistic pleiotropy, might maintain genes that increase the risk for homosexuality because they increase the number of sex partners in the relatives of homosexuals.” … “The traits most reliably associated with homosexuality relate to masculinity–femininity; homosexual men tend to be more feminine than heterosexual men, and homosexual women tend to be more masculine than heterosexual women.”


In other words, this ‘sex atypicality’ may be advantageous when expressed in heterosexuals.  Do some (perhaps many) females tend to be more attracted to males with certain feminine behavioral traits such as tenderness, considerateness, and kindness?  In this study, the results indeed show that “... psychologically masculine females and feminine men are (a) more likely to be nonheterosexual but (b), when heterosexual, have more opposite-sex sexual partners” (Zietsch et al. 2008).


Another, more general hypothesis for homosexuality is that same-sex attraction never really imposed a significant penalty on fitness in our deep ancestral past, because heterosexual sex was still routinely practiced in spite of it.  There are two very different contexts in which this effect would be expected: 

Bivariate trait space continuum for sexual orientation (Aarssen 2015)
The first obtains from just straightforward bisexuality, i.e. where ancestral sex lives commonly involved a mix of both same-sex and heterosexual activity but in varying proportions (informed in part by genotypic variability) ranging from bisexual with a more dominant opposite-sex attraction, to bisexual with a more dominant same-sex attraction.  But random mating within this mix would also have produced genetic variants that informed strictly homosexual as well as strictly heterosexual orientations — with only the latter of the two strongly favoured by natural selection. 

Nevertheless, despite being strongly disfavoured by natural selection, strict homosexuality would have persisted in low frequency simply because of genetic factors informing same-sex attraction that were inherited from bisexual ancestors.  Under this hypothesis, homosexuality is not really a Darwinian paradox at all; instead it, along with asexuality, was just a periodic maladaptive genotypic by-product of ancestral gene transmission.


Most ancestral bisexuals, however, were probably female. Research has shown that men are generally attracted to one sex or the other, whereas women are more likely than men to have a bisexual orientation. According to one hypothesis, a ‘fluid sexuality’ that enabled same-sex sexual behavior in women made it easier for women to raise children together. 


Painting of King Solomon and his wives
by Giovanni Venanzi di Pesaro (1627-1705)
This, according to Kuhle (2013): “… would have been particularly beneficial to ancestral women when their male mates were unable to adequately care, protect, and provide because they were injured, away on prolonged hunts, or preoccupied finding, courting, and mating with other women.  The latter scenario was particularly likely to occur within polygynous mating systems. ... If so, it is possible that men’s relative lack of aversion to a female mate’s homosexual, rather than heterosexual, affair … and men’s common fantasy of simultaneously mating with multiple women … is an outgrowth of a male psychology designed to promote their mates’ same-sex sexual behavior.”


The latter would have commonly benefitted the offspring of not just bisexual women, but also the men who fathered these offspring, many of whom were often not around to help raise them.  Ironically, therefore, for many of our male ancestors, it was probably better for their own genetic fitness if their partners were bisexual.

The second context for predicting successful heterosexual practice (and hence gene transmission) — despite the presence of same-sex attraction — applies even in the case of strong preference for same-sex encounters, including strict homosexuality.  According to what we might call the ‘failed disfavouring selection’ hypothesis (Aarssen 2015), female homosexuality probably never had widespread opportunity to be strongly disfavoured by natural selection.  This is because, throughout much (probably most) of human history, males were to a large extent in control of the sexual activity and fertility of females.  Many or most women, therefore, were essentially forced — by patriarchal subjugation, socio-cultural expectations, and/or religious imperatives — to mate with men and bear their (frequently many) offspring, regardless of their sexual orientation.  [The same would also have been true regardless of the intensity of female sexual impulse (drive).]


Accordingly, so called ‘gay genes’ in many of our female ancestors — including those that might have been inherited by their sons as well as their daughters — were never significantly limited in their transmission success to future generations.  And so in this context, the prevalence of homosexuality today is (again) not really a Darwinian paradox at all.

Importantly, however, women more than ever are now in control of both their fertility and their sex lives, and their empowerment in this and other basic human rights continues to grow rapidly on a global scale.  For our predecessors, choosing successfully to be a practicing homosexual meant zero gene transmission through direct lineage.  But today this is not necessarily the case, given reproductive technologies such as sperm banks and in-vitro fertilization (and perhaps, in the future, human cloning).  Unless the latter become widely practiced as a popular cultural norm, however, the widespread mating and reproductive freedom for women today means that selection against an exclusively or predominantly lesbian orientation may soon be ramping up (Aarssen & Altman 2006).  If female bisexuality remains alive and well, so also would some homosexuality (because of genetic inheritance from bisexual maternal ancestors).

The above scenario is a good example of how some traits — like homosexuality, the generally weaker sexual impulse in females compared with males, and the child-free culture — may be a consequence not of having been favoured by natural selection in the ancestral past, but of not having been disfavoured by it.


References

Aarssen LW (2015) What Are We? Exploring the Evolutionary Roots of Our Future. Queen’s University, Kingston.

Aarssen LW, Altman S (2006) Explaining below-replacement fertility and increasing childlessness in wealthy countries: Legacy drive and the “transmission competition” hypothesis. Evolutionary Psychology 4: 290-302.

Barash DP (2012) Homo Mysterious: Evolutionary Puzzles of Human Nature. Oxford University Press, Oxford.

Camperio-Ciani A, Corna F, Capiluppi C (2004) Evidence for maternally inherited factors favouring male homosexuality and promoting female fecundity. Proceedings of the Royal Society of London, Series B: Biological Sciences 271: 2217–2221.

Iemmola F, Camperio-Ciani A (2009) New evidence of genetic factors influencing sexual orientation in men: female fecundity increase in the maternal line. Archives of Sexual Behavior 38: 393–399.

Kuhle BX (2013) Born both ways: The alloparenting hypothesis for sexual fluidity in women. Evolutionary Psychology 11: 304-323.

Ngun TC, Ghahramani N, Sánchez FJ, Bocklandt S, Vilain E (2011) The genetics of sex differences in brain and behavior. Frontiers in Neuroendocrinology 32: 227–246.

Sanders AR, Martin ER, Beecham GW, Guo S, Dawood K, Rieger G, Badner JA, Gershon ES, Krishnappa RS, Kolundzija AB, Duan J, Gejman PV, Bailey JM (2015) Genome-wide scan demonstrates significant linkage for male sexual orientation. Psychological Medicine 45: 1379-1388.

Zietsch BP, Morley KI, Shekar SN, Verweij KJH, Keller MC, Macgregor S, Wright MJ, Bailey JM, Martin NG (2008) Genetic factors predisposing to homosexuality may increase mating success in heterosexuals. Evolution and Human Behavior 29: 424–433.

Sunday, December 27, 2015

Conundrums of moral obligation

Does one have greater personal obligation to the welfare of some individuals over others?  In my recent book (Aarssen 2015), I explore this question in terms of its evolutionary roots. The answer is clearly yes for most people, at least in practice, and evolutionary theory explains why — based on genetic relatedness.

I care immensely about my own personal welfare because I am 100% related to myself — ancient survival instinct in action.  But one’s sense of obligation to others generally falls off with decreasing genetic relatedness.  Hence, most people are motivated to come to the aid of an offspring or a sibling in need.  Commonly, they will also offer help — although less readily — to more distantly related kin.  This bias based on kinship (nepotism) is cross-cultural, and generally accepted as moral, without debate; it is ‘hard-wired’ in the ‘selfish genes’ of humans. 

But a moral dilemma unfolds as we move further down the genetic-relatedness scale, where we encounter additional levels of in-group bias — most notably based on racial membership (racism), and further down based on species membership (speciesism).  These are all anciently evolved components of human nature because in-group bias (including other forms based on nationalism, xenophobia, ethnocentrism, religion, and other cultural worldviews) generally rewarded gene transmission success in our deep ancestral past. However, they have commonly received variable ratings in terms of morality, and the dilemma then lies in defining (or in whether it is even possible to define) an objective basis for moral standards.

On the one hand, the bounds of morality are shaped in part by social learning within local cultures.  In other words, societies have always decided for themselves what is unacceptable, immoral, and unlawful for the best interests of the ‘public good’ (and some groups have made better decisions — for their own welfare — than others).  And this is why Darwinism can never be used to justify or excuse behaviours (e.g. sexual harassment / abuse and other crimes) that violate or compromise human rights or public safety.   Moral and social responsibility always trump genetic bequeathal.

At the same time, however (thankfully), evolution has also given us what can only be described as a ‘moral instinct’.  This is because what is good for the best interests (prosperity) of the social group — e.g. in the case of certain agreed-upon moral codes of conduct — is normally contingent on this also being in the best interests of gene transmission success for at least most of the resident individuals.  In other words, traits influenced by genes have commonly shaped the cultures that people like — because they were (commonly) cultures that rewarded ancestral gene transmission success.   In pondering on morality, Darwin (1859) wrote:  “As soon as this virtue is honored and practiced by some few men, it spreads through instruction and example to the young, and eventually becomes incorporated in public opinion.”  Importantly however, humans evolved a moral instinct not in order to deliver us morality per se — only fitness.  As Alexander (1985) put it: “Although morality is commonly defined as involving justice for all people, or consistency in the social treatment of all humans, it may have arisen for immoral reasons, as a force leading to cohesiveness within human groups but specifically excluding and directed against other human groups with different interests.”

A comparison of racism with speciesism shows particularly well the difficulty of defining moral standards.  Both — like nepotism — have been regarded as largely moral throughout much of human history.  But in most cultures today, racism is of course widely regarded as immoral, because it is never in the best interests of modern society. It undermines the ‘common good’, plus it conflicts with our evolved inclinations to show kindness and helpfulness towards fellow humans.  And yet, racism often still rears its ugly head. In contrast, speciesism today is still largely regarded as moral by the general public, and the philosopher might reasonably ask why — i.e. along a continuum of genetic relatedness, why should an immoral in-group bias (racism) be sandwiched between two (nepotism and speciesism) that are traditionally considered moral (e.g. see Lawlor 2012)?


Clearly the harvesting and exploitation of animals for human consumption has been a major boost to human prosperity and genetic fitness ever since our distant ancestors became hunters.  Speciesism, however, has come under increasing attack in recent years over concerns about the ethical treatment of animals used in modern medical research and food production.  And a growing number of people now regard speciesism to be just as immoral as racism (e.g. http://speciesismthemovie.com/).  It will be interesting to watch how this movement unfolds, and to consider whether it has evolutionary roots. For example, has our empathic instinct evolved to become so acute that the emotion involved (perhaps partially modulated by social learning) has started recently to spill over somewhat towards the plight of individuals belonging to other species?  I wonder whether this might be connected with our uniquely human mortality salience and self-impermanence anxiety (Aarssen 2015). Cave (2014) offers an interesting perspective:

“This horror at the death of other creatures is intimately bound up with horror at the prospect of one’s own demise.  Flies come and go in countless masses, mostly beyond my sight and care.  But when something happens that causes me to empathise, to become the fly, then its death becomes terrible.  As the poet William Blake realised when he, too, carelessly squashed an insect:

     Am not I
     A fly like thee?
     Or art not thou
     A man like me?”



References

Aarssen LW (2015) What Are We? Exploring the Evolutionary Roots of Our Future. Queen’s University, Kingston.

Alexander RD (1985) A biological interpretation of moral systems. Zygon 20: 3-20.

Cave S (2014) Not nothing: The death of a fly is utterly insignificant – or it’s a catastrophe. How much should we worry about what we squash? Aeon Magazine. http://aeon.co/magazine/philosophy/how-much-should-we-worry-about-death/

Darwin C (1859) On the Origin of Species. Facsimile of the first edition. Harvard University Press, Cambridge.

Lawlor R (2012) The ethical treatment of animals: The moral significance of Darwin’s theory. In, Brinkworth M, Weinert F (Eds). Evolution 2.0. Implications of Darwinism in Philosophy and the Social and Natural Sciences. Springer, Verlag.
  

Sunday, November 1, 2015

Why are we still searching for evidence of natural selection?

During the last century, especially the second half, many biologists (including myself) spent their careers in search of supporting evidence for Darwin’s theory of natural selection. It was exciting to witness the seemingly endless accumulation of published data from both laboratories and natural habitats everywhere, and for everything from bacteria to flowering plants to mammals.  We couldn’t get enough. Drunk on the exuberance of the ‘modern synthesis’, we all embraced an unquestioned faith in a new and exciting mantra:  “Nothing in biology makes sense except in the light of evolution” (Dobzhansky 1973).   It was true.

And so … Darwin was right:  natural selection works.
It’s everywhere.  Like gravity.

But the exuberance still has an oddly powerful hold for many evolutionists. Even more than a century and a half after publication of ‘The Origin’, they seem never to tire of hearing another new report (added to the now brimming pile) showing that natural selection works — even showing it working exactly as most researchers already expect it to.
Hypothesis:  The insects are mostly green because the lizard is better at spotting the brown ones.  'Eureka!  Let's do an experiment to show this!  We could even check on the physiology and genes involved.  It's bound to show that Darwin was right!' 

It’s like they are addicted to the buzz of another discovery ‘fix’, because they know in advance that it always delivers.  [After all, Darwin was right].  With each new testimonial for an apparent consequence of natural selection — e.g. with details of how a particular trait is adaptive, or the mechanism for how genes inform it — there is continuing praise and blessing from the fellowship. It doesn't matter that (as is often the case) essentially the same interpretation for the same trait (in different species or from different habitats) had already been published previously — all fitting neatly in line with standard predictions of accepted theory.  These testimonials take place at annual conferences, and weekly study groups and meetings with guest speakers for evolutionary biology congregations, attended faithfully by many followers within university departments everywhere.  Even without prayers and hymns, the atmosphere is not unlike a worship service — for Darwinism — with a sea of heads nodding in approval, frothing at the mouth.

The cherished convictions of the congregation are especially strengthened when the newly reported study involves a particularly elegant experimental design, an application of new technology, a novel or expensive method of data collection, or complex and rigorous data analyses.  Hallelujah!  Praise the Lord!  A new, more potent fix with a stronger buzz!  It doesn’t even seem to matter that the evidence presented often involves a study design from which alternative results or interpretations would have been implausible, or essentially impossible to obtain.  If a study does happen to turn up negative results, they normally end up in a file drawer, unpublished — because the only acceptable explanation is that the experimental design or data collection must have been improperly carried out.  [After all, Darwin was right].  And so, routinely, investigations are conducted for which the answers (positive ones) are already known in advance.  [... not a great track record for bolstering integrity and public confidence in science].

From Darwinism itself, I suppose, we should expect nothing less.  For our ancestors, identification and alignment of one’s views and beliefs with those of a growing contingent of others not only rewarded their intrinsic need for a sense of belonging and self-esteem.  It also undoubtedly provided assurance — often grounded in truth — that this conspicuous group of contemporaries was probably on to something important (without needing to know why) that ended up rewarding reproductive success.  [Today, of course, researchers know why, in terms of strategy for rewarding tenure, promotion, and status from an expanding publication record that corroborates the cherished convictions of the congregation].

But surely we need to move on in the 21st Century. We don’t need new reports showing that gravity works, or showing that the earth revolves around the sun.  Similarly, when a new report arrives showing essentially nothing but another example of how natural selection works in ways that we all could have guessed — consistent with what established theory predicts — I am so not impressed.

Is there any valuable research left to do on natural selection?  Certainly.  It remains valuable where there are convincing, testable reasons to hypothesize that natural selection works in surprising ways — or, surprisingly, isn't working at all — i.e. running counter to expectations based on published theory, or in ways not readily predicted by it. It can also be valuable when the research is placed conspicuously in the context of contributions that have potential to address needs or goals that are important to society at large.  In many cases, the latter is easy to do, but commonly ignored (usually because it is regarded as uncool by the elites).

Please, if you must search for more evidence that Darwin was right about natural selection, at least make sure it passes the test for conspicuous and meaningful valuation branding. Otherwise, it runs a growing risk of looking like something stuck in the 20th Century.  I need regular reminders of this myself; as an ardent evolutionist, I can be as rabid as they come.


References

Dobzhansky T (1973) Nothing in biology makes sense except in the light of evolution. American Biology Teacher 35 (3): 125–129.

Saturday, October 24, 2015

Why are we sad at funerals?

Several characteristic features of human nature, social life, and culture show signs of having been shaped in part by genetic inheritance, resulting from Darwinian selection in our ancestral past.  These evolutionary roots are explored in my new book (Aarssen 2015), where an interesting example concerns the function of funeral ceremonies, and the usual human emotional responses to them.  “The cardinal fact is that all people everywhere take care of their dead in some fashion, while no animal does anything of the sort. … Only a being who knows that he himself will die is likely to be really concerned about the death of others” (Dobzhansky 1967).

Do funerals make us sad, then, because we will miss the deceased, or because funerals force us to confront the morbid fear of our own eventual demise? Do we embrace these rituals because they honour the life and memory of the deceased, or because they provide an effective remedy for the terrifying reminder that — like the deceased — ‘my life is also impermanent’?



Perhaps funerals then serve mainly to alleviate a crippling worry:  ‘Will I be remembered?  Will I leave something of myself for the future?’  If so, then these emotions might be best understood in terms of a uniquely human Legacy Drive.  As Pinker (1997) put it:  “Ancestor worship must be an appealing idea to those who are about to become ancestors.”

Also interesting is how the sadness and anxiety evoked by such confrontations with death are routinely mitigated by domains for Leisure Drive.  When the funeral is over, people are commonly anxious to forget it as soon as possible, by intentionally seeking out pleasurable, anxiety-buffering distractions — like eating, alcohol consumption, going shopping, and having sex (Mandel and Smeesters 2007, Schultz 2008, Birnbaum et al. 2011, Goldenberg 2013).

As Albert Camus (1956) put it: “Man is the only creature who refuses to be what he is.”


References

Aarssen L (2015) What Are We? Exploring the Evolutionary Roots of Our Future. Queen's University, Kingston.

Birnbaum G, Hirschberger G, Goldenberg J (2011) Desire in the face of death: Terror management, attachment, and sexual motivation. Personal Relationships 18: 1-19.

Camus A (1956) The Rebel. Alfred A. Knopf, New York.

Dobzhansky T (1967) The Biology of Ultimate Concern. The New American Library, New York.

Goldenberg JL (2013) Immortal objects: The objectification of women as terror management. Objectification and (De)humanization 60: 73-95.

Mandel N, Smeesters D (2007) Shop ‘til you drop: the effect of mortality salience on consumption quantity. Advances in Consumer Research 34: 600-601.

Pinker S (1997) How the Mind Works. Norton, New York.

Schultz N (2008) Morbid thoughts make us reach for the cookie jar. New Scientist 198: 12.

Monday, September 28, 2015

Why most published data are not reproducible

Public confidence in science has suffered another setback from recent reports that the data from most published studies are not reproducible (Baker 2015, Bartlett 2015, Begley et al. 2015, Jump 2015).  The statistical implication is unavoidable, and troubling to say the least:  it means that the results of at least half of all research that has ever been published, probably in all fields of study, are inconclusive at best. They may be reliable and useful, but maybe not.  Mounting evidence in fact leans toward the latter (Ioannidis 2005, Lehrer 2010, Hayden 2013).

Moreover, these inconclusive reports are likely to involve mostly the studies that had been regarded as especially promising contributions — lauded as particularly novel and ground-breaking.  In contrast, the smaller group that passed the reproducibility test is likely to involve mostly esoteric research that few people care about, or so-called ‘safe research’: studies that report merely confirmatory results, designed to generate data that were already categorically expected — i.e. studies that aimed to provide just another example of support for well-established theory, or support for something that was already an obvious bet or easily believable anyway, even without data collection (or theory).  [A study that anticipates only positive results in advance is pointless; there is no reason for doing the science in the first place — it just confirms what one already knows must be true.]

Are there any remedies for this reproducibility problem?  Undoubtedly some, and researchers are scrambling, ramping up efforts to identify them [see Nature Special (2015) on Challenges in irreproducible research].  Addressing them effectively (if it is possible at all) will require nothing short of a complete re-structuring of the culture of science, with new and revised manuals of ‘best practice’ (e.g. see Nosek et al. 2015, Sarewitz 2015; and see Center for Open Science: Transparency and Openness Promotion (TOP) Guidelines). 


Some of the reasons for irreproducibility, however, will not go away easily.  In addition to outright fraud, there are at least six more — some more unscrupulous than others:


(1) Page space restrictions of some journals.  For some studies, results cannot be reproduced because the authors were required to limit the length of the paper.  Hence, important details required for repeating the study are missing.   







(2) Sloppy record keeping / data storage / accessibility.  Researchers are not all equally meticulous by nature.  In some cases, methodological details are missing inadvertently because the authors simply forgot to include them, or the raw data were not stored or backed up with sufficient care.








(3) Practical limitations that prevent ‘controls’ for everything that might matter. For many study systems, there are variables that simply cannot be controlled.  In some cases, the authors are aware of these, and acknowledge them (and hence also the inconclusive nature of their results).  But in other cases, there were important variables that could have been controlled but were innocently overlooked, and in still other cases there were variables that simply could not have been known or even imagined.   The impact of these ‘ghost variables’ can severely limit the chances of reproducing the results of the earlier study. 


(4) Pressure to publish a lot of papers quickly. Success in academia is measured by counting papers. Researchers are often anxious, therefore, to publish a ‘minimum publishable unit’ (MPU), and as quickly as possible, without first repeating the study to bolster confidence that the results can be replicated and were not just a fluke.  Inevitably of course, some (perhaps a lot) of the time, results (especially MPUs) will be a fluke, but it is generally better for one’s career not to take the time and effort to find out (time and effort taken away from cranking out more papers).  When others do, however, take the time and effort to check, more instances of irreproducible results make the news headlines — news that would be a lot less common if the culture of academia encouraged researchers to replicate their own studies before publishing them.


(5) Using secrecy (omissions) to retain a competitive edge.   As Collins and Tabak (2014) note:  “…some scientists reputedly use a 'secret sauce' to make their experiments work — and withhold details from publication or describe them only vaguely to retain a competitive edge.”






(6) Pressure to publish in ‘high end’ journals.  Successful careers in academia are measured not just by counting papers, but especially by counting papers in ‘high end’ journals — those that generate high Impact Factors because of their mission to publish only the most exciting findings, and their disinterest in publishing negative findings. Researchers are thus addicted to chasing Impact Factor (IF) as a status symbol within a culture that breeds elitism — and the high end journals feed that addiction (many of them while cashing in on it). The traditional argument for defending the value of 'high-end branding' for journals (supposedly measured by high IF) is that it provides a convenient filtering mechanism, allowing one to quickly find and read the most significant research contributions within a field of study.  In fact, however, the IF of a ‘high-end’ journal says little to nothing about the eventual relative impact (citation rates) of the vast majority of papers published in it (Leimu and Koricheva 2005). A high journal IF, in most cases, is driven by a small handful of 'superstar' articles (or articles by a few 'superstar' authors). Journal 'brand' (IF) therefore has only marginal value at best as a filtering mechanism for readers and researchers.

Moreover, addiction to chasing Impact Factor, despite not delivering what its gate-keepers proclaim, is ironically at the heart of the irreproducibility problem — for at least two reasons:  First, it fuels incentives for researchers to be biased in the selection of study material (e.g. using a certain species) that they already have reason to suspect, in advance, is particularly likely to provide support for the 'favoured hypothesis’ — the 'exciting story'.  Any data collected for different study material that fail to support the 'exciting story’ must of course be shelved — the so called ‘file drawer’ problem — because high end journals won’t publish them.

Second, addiction to chasing IF can motivate researchers to report their findings selectively, excluding certain data or failing to mention the results of certain statistical analyses that do not fit neatly with the ‘exciting story’.  This may, for example, include ‘p-hacking’ — searching for and choosing to emphasize only analyses that give small p-values.  And obviously there is no incentive here to repeat one’s experiment, ‘just to be sure’; self-replication would run the risk that the ‘exciting story’ might go away.
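The arithmetic behind p-hacking is easy to demonstrate with a short simulation (a sketch of my own, not taken from any of the studies cited here; the analysis count K and the 5% alpha level are assumed numbers). If each of K candidate analyses of a true null hypothesis independently reaches p < 0.05 with probability 0.05, then reporting a 'positive' whenever any one of them does inflates the false-positive rate to 1 - 0.95^K:

```python
import random

random.seed(1)

ALPHA = 0.05       # conventional significance cutoff
TRIALS = 100_000   # simulated studies of a TRUE null hypothesis
K = 10             # alternative analyses tried per p-hacked study

def significant():
    """One honest test of a true null: 'significant' with probability ALPHA."""
    return random.random() < ALPHA

# Honest studies: a single pre-specified analysis each.
honest = sum(significant() for _ in range(TRIALS)) / TRIALS

# P-hacked studies: try K analyses, report a positive if ANY reaches p < 0.05.
hacked = sum(any(significant() for _ in range(K)) for _ in range(TRIALS)) / TRIALS

print(f"false-positive rate, honest:   {honest:.3f}")
print(f"false-positive rate, p-hacked: {hacked:.3f}")
```

With these assumed numbers the honest rate stays near 0.05, while the p-hacked rate climbs to roughly 1 - 0.95^10 ≈ 0.40: four 'exciting stories' in ten, conjured from nothing at all.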
  


All of this means that the research community and the general public are commonly duped — led to believe that support for the 'exciting story’ is stronger than it really is.  And this is revealed when later research attempts, unsuccessfully, to replicate the effect sizes of earlier supporting studies.  Negative findings in this context then, ironically, become more ‘publishable’ (including for study material that was already used earlier and that ended up in a file drawer somewhere).  Hence, empirical support for an exciting new idea commonly accelerates rapidly at first, but eventually starts to fall off ('regression to the mean') as more replications are conducted — the so called ‘decline effect’ (Lehrer 2010).
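The decline effect itself is just selection arithmetic, and can be sketched with a small simulation (my own illustration; the true effect size, the per-study standard error, and the publication cutoff are all assumed numbers). When journals publish only the estimates that clear a significance threshold, the published mean is inflated above the true effect, while unselected replications regress back toward it:

```python
import random
import statistics

random.seed(7)

TRUE_EFFECT = 0.2    # the real (modest) effect size
SE = 0.15            # standard error of each study's estimate
CUTOFF = 1.96 * SE   # roughly the estimate needed to reach p < 0.05

# Many labs independently estimate the same true effect, with sampling noise.
estimates = [random.gauss(TRUE_EFFECT, SE) for _ in range(50_000)]

# Journals publish only the 'exciting' (significant) estimates.
published = [e for e in estimates if e > CUTOFF]

print(f"true effect:                  {TRUE_EFFECT:.2f}")
print(f"mean published estimate:      {statistics.mean(published):.2f}")
print(f"mean across all replications: {statistics.mean(estimates):.2f}")
```

Under these assumptions the published literature reports an effect nearly twice the true size; as replications accumulate without the filter, the estimate 'declines' toward the true value, exactly the pattern Lehrer (2010) describes.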

The progress of science happens when research results reject a null hypothesis, thus supporting the existence of a relationship between two measured phenomena, or a difference among groups — i.e. a ‘positive result’.  But progress is also supposed to happen when research produces a 'negative result' — i.e. results that fail to reject a null hypothesis, thus failing to find a relationship or difference.  Science done properly then, with precision and good design but without bias, should commonly produce negative results, even perhaps as much as half of the time.  But negative results are mostly missing from published literature.  Instead, they are hidden in file drawers, destroyed altogether, or they never have a chance of being discovered.  Because positive results are more exciting to authors, and especially journal editors, researchers commonly rig their study designs and analyses to maximize the chances of reporting positive results.

The absurdity of this contemporary culture of science is now being unveiled by the growing evidence of failure to reproduce the results of most published research.  The results of new science submitted for publication today, in the vast majority of cases, conform to researchers' preconceived expectations — and always so for those published in high end journals. This is good reason to suspect a widespread misappropriation of science.

There is a lot that needs fixing here.


References

Baker M (2015) Over half of psychology studies fail reproducibility test: Largest replication study to date casts doubt on many published positive results. Nature.

Bartlett T (2015) The results of the reproducibility project are in: They’re not good. The Chronicle of Higher Education.  http://chronicle.com/article/The-Results-of-the/232695/

Begley CG, Buchan AM, Dirnagl U (2015) Robust research: Institutions must do their part for reproducibility. Nature.  http://www.nature.com/news/robust-research-institutions-must-do-their-part-for-reproducibility-1.18259?WT.mc_id=SFB_NNEWS_1508_RHBox

Collins FS, Tabak LA (2014) Policy: NIH plans to enhance reproducibility. Nature.

Hayden EC (2013) Weak statistical standards implicated in scientific irreproducibility: One-quarter of studies that meet commonly used statistical cutoff may be false. Nature.

Ioannidis JPA (2005) Why most published research findings are false.  PLoS Medicine.

Jump P (2015) Reproducing results: how big is the problem? Times Higher Education. https://www.timeshighereducation.co.uk/features/reproducing-results-how-big-is-the-problem?nopaging=1

Lehrer J (2010) The truth wears off: Is there something wrong with the scientific method?  The New Yorker.  http://www.newyorker.com/magazine/2010/12/13/the-truth-wears-off

Leimu R, Koricheva J (2005) What determines the citation frequency of ecological papers?  Trends in Ecology and Evolution 20: 28-32. 

Nosek BA, et al. (2015) Promoting an open research culture.  Science 348: 1422-1425.

Sarewitz D (2015) Reproducibility will not cure what ails science. Nature.


Thursday, August 27, 2015

Evolutionary roots of tabula rasa


[Image: Standard Social Science metaphor for the human mind as a 'blank slate' at birth]

According to the 'Standard Social Science Model', the human mind at birth is a 'blank slate', or in Latin, tabula rasa.  This means that our understanding of the world, and the manner in which we think and behave, are acquired only through exposure to and learning from our environment, including especially the social environment.  Variations in human behaviours then are considered to be entirely a consequence of variable environments and variation in the opportunity to learn from different environments.  In other words, we and we alone are the arbiters of our own minds; they are only what our experiences and opportunities enable them to be, and are not to any significant degree a product of genetic inheritance influenced by Darwinian selection in the ancestral past.


In contrast, according to the 'Evolutionary Science Model', the assembly of the mind is certainly affected by variation in environment / learning experience, and profoundly so.  But it is also — because of genetic inheritance — partially and variably structured at birth.  From this pre-structure, then, the mind develops a degree of ‘prepared learning’.  "In prepared learning, we are innately pre-disposed to learn and thereby reinforce one option over another. We are ‘counter-prepared’ to make alternative choices, or even actively to avoid them" (Wilson 2012). Humans learn, then, in ways that are modulated in part by innate predispositions, and by partially instinctual responses to certain environmental cues, responses that influence a range of variation in particular impulsive motivations and behaviours of kinds that generally rewarded the reproductive success of ancestors.

[Image: Genetic Determinism metaphor for the human mind as a blueprint or computer program]
Importantly, this Darwinian view of the mind should not be confused with 'genetic determinism', where the mind is likened to a blueprint or a computer algorithm, with features solely determined by ‘coding’ from genes.  Instead, the human mind is like a jukebox (Tooby and Cosmides 1992), where a number of 'tracks' (genetic instructions) are already stored in the 'machine', but particular environments determine which 'buttons' get pressed to play particular tracks.  The human mind can also be likened to a colouring book (Kenrick et al. 2010), where the inner structure of pre-drawn lines (genetic instructions) interacts with environmental inputs (different artists with differently coloured crayons) to determine the final phenotype of the behaviour (picture).




Accordingly, there is very little about the roles of genes and environment in the mental life of humans that reflects alternatives in a 'tug of war' (famously referred to as 'nature versus nurture').  In other words, their relationship in human evolution has been more of an inter-dependence or 'blending', and not one where one has generally overwhelmed the other.  As Dobzhansky (1963) put it:  "The premise which cannot be stressed too often is that what the heredity determines are not fixed characters or traits but developmental processes.  The path which any developmental process takes is, in principle, modifiable both by genetic and by environmental variables.  It is a lack of understanding of this basic fact that is, it can safely be said, responsible for the unwillingness, often amounting to an aversion, of many social scientists … to admit the importance of genetic variables in human affairs."

In my new book (Aarssen 2015), coming out later this year, I propose that the 'blank slate' model of the mind may itself be a cultural (memetic) product of evolution by natural selection.  Being a staunch defender of the 'blank slate' may just be a modern manifestation of a deeply ingrained disposition (inherited from the ancestral past) to be a staunch defender of the 'self' — a disposition that was adaptive to ancestors because it helped to dispel the worry that one might be stuck with an intrinsically inferior 'self'.  The Evolutionary Science Model of the mind poses a threat, therefore, because it espouses that an 'inferior self' is likely to be informed, at least partially, by less-than-superior genes. Tenacious belief in the 'blank slate' then (like belief in religion) provides a buffer from the fear of having limited potential for memetic legacy (with the latter conjured by self-impermanence anxiety — explored in an earlier post).  It also facilitates the appealing notion of a sense of 'ownership' of who you are; i.e. giving license for you — your 'inner self' rather than your physical DNA — to take personal credit for the kind of person you turned out to be, and hence the quality of memetic legacy that you have potential to leave — thus bolstering self-esteem (a handy toolkit item for promoting gene transmission success).  If this interpretation is correct, then the debate between the Standard Social Science ('blank slate') Model of the human mind and the Evolutionary Science Model is misguided; in other words, the 'blank slate' view is not in conflict with Darwinian evolution — it is a cultural product of it.




References

Aarssen L (2015) What Are We?  Exploring the Evolutionary Roots of Our Future.  Queen's University, Kingston.

Dobzhansky T (1963) Anthropology and the natural sciences: The problem of human evolution. Current Anthropology 4: 138, 146-148.

Kenrick DT, Nieuweboer S, Buunk AB (2010) Universal mechanisms and cultural diversity: replacing the blank slate with a coloring book. In Schaller M, Norenzayan A, Heine SJ, Yamagishi T, Kameda T (Eds.), Evolution, Culture and the Human Mind. Psychology Press, New York.

Tooby J, Cosmides L (1992) The psychological foundations of culture. In Barkow J, Cosmides L, Tooby J (Eds.), The Adapted Mind: Evolutionary Psychology and the Generation of Culture, pp. 19–136. Oxford University Press, New York.

Wilson EO (2012) The Social Conquest of Earth. Liveright Publishing Corporation, New York.