Sunday, December 27, 2015

Conundrums of moral obligation

Does one have a greater personal obligation to the welfare of some individuals than to others?  In my recent book (Aarssen 2015), I explore this question in terms of its evolutionary roots. The answer is clearly yes for most people, at least in practice, and evolutionary theory explains why — based on genetic relatedness.  

I care immensely about my own personal welfare because I am 100% related to myself — ancient survival instinct in action.  But one’s sense of obligation to others generally falls off with decreasing genetic relatedness.  Hence, most people are motivated to come to the aid of an offspring or a sibling in need.  Commonly, they will also offer help — although less readily — to more distantly related kin.  This bias based on kinship (nepotism) is cross-cultural, and generally accepted as moral, without debate; it is ‘hard-wired’ in the ‘selfish genes’ of humans. 

But a moral dilemma unfolds as we move further down the genetic-relatedness scale, where we encounter additional levels of in-group bias — most notably based on racial membership (racism), and further down based on species membership (speciesism).  These are all anciently evolved components of human nature because in-group bias (including other forms based on nationalism, xenophobia, ethnocentrism, religion, and other cultural worldviews) generally rewarded gene transmission success in our deep ancestral past. However, they have commonly received variable ratings in terms of morality, and the dilemma then lies in defining (or in whether it is even possible to define) an objective basis for moral standards.

On the one hand, the bounds of morality are shaped in part by social learning within local cultures.  In other words, societies have always decided for themselves what is unacceptable, immoral, and unlawful for the best interests of the ‘public good’ (and some groups have made better decisions — for their own welfare — than others).  And this is why Darwinism can never be used to justify or excuse behaviours (e.g. sexual harassment / abuse and other crimes) that violate or compromise human rights or public safety.   Moral and social responsibility always trump genetic bequeathal.

At the same time, however (thankfully), evolution has also given us what can only be described as a ‘moral instinct’.  This is because what is good for the best interests (prosperity) of the social group — e.g. in the case of certain agreed-upon moral codes of conduct — is normally contingent on this also being in the best interests of gene transmission success for at least most of the resident individuals.  In other words, traits influenced by genes have commonly shaped the cultures that people like — because they were (commonly) cultures that rewarded ancestral gene transmission success.   In pondering on morality, Darwin (1859) wrote:  “As soon as this virtue is honored and practiced by some few men, it spreads through instruction and example to the young, and eventually becomes incorporated in public opinion.”  Importantly however, humans evolved a moral instinct not in order to deliver us morality per se — only fitness.  As Alexander (1985) put it: “Although morality is commonly defined as involving justice for all people, or consistency in the social treatment of all humans, it may have arisen for immoral reasons, as a force leading to cohesiveness within human groups but specifically excluding and directed against other human groups with different interests.”

A comparison of racism with speciesism particularly exposes the difficulty of defining moral standards.  Both — like nepotism — have been regarded as largely moral throughout much of human history.  But in most cultures today, racism is of course widely regarded as immoral, because it is never in the best interests of modern society: it undermines the ‘common good’, and it conflicts with our evolved inclinations to show kindness and helpfulness towards fellow humans.  And yet, racism often still rears its ugly head. In contrast, speciesism today is still largely regarded as moral by the general public, and the philosopher might reasonably ask why — i.e. along a continuum of genetic relatedness, why should an immoral in-group bias (racism) be sandwiched between two (nepotism and speciesism) that are traditionally considered moral (e.g. see Lawlor 2012)?


Clearly the harvesting and exploitation of animals for human consumption has been a major boost to human prosperity and genetic fitness ever since our distant ancestors became hunters.  Speciesism, however, has come under increasing attack in recent years over concerns about the ethical treatment of animals used in modern medical research and food production.  And a growing number of people now regard speciesism to be just as immoral as racism (e.g. http://speciesismthemovie.com/).  It will be interesting to watch how this movement unfolds, and to consider whether it has evolutionary roots. For example, has our empathic instinct evolved to become so acute that the emotion involved (perhaps partially modulated by social learning) has recently started to spill over somewhat towards the plight of individuals belonging to other species?  I wonder whether this might be connected with our uniquely human mortality salience and self-impermanence anxiety (Aarssen 2015). Cave (2014) offers an interesting perspective:

“This horror at the death of other creatures is intimately bound up with horror at the prospect of one’s own demise.  Flies come and go in countless masses, mostly beyond my sight and care.  But when something happens that causes me to empathise, to become the fly, then its death becomes terrible.  As the poet William Blake realised when he, too, carelessly squashed an insect:

     Am not I
     A fly like thee?
     Or art not thou
     A man like me?”



References

Aarssen LW (2015) What Are We? Exploring the Evolutionary Roots of Our Future. Queen’s University, Kingston.

Alexander RD (1985) A biological interpretation of moral systems. Zygon 20: 3-20.

Cave S (2014) Not nothing: The death of a fly is utterly insignificant – or it’s a catastrophe. How much should we worry about what we squash? Aeon Magazine. http://aeon.co/magazine/philosophy/how-much-should-we-worry-about-death/

Darwin C (1859) On the Origin of Species. Facsimile of the first edition. Harvard University Press, Cambridge.

Lawlor R (2012) The ethical treatment of animals: The moral significance of Darwin’s theory. In, Brinkworth M, Weinert F (Eds). Evolution 2.0. Implications of Darwinism in Philosophy and the Social and Natural Sciences. Springer-Verlag, Berlin.
  

Sunday, November 1, 2015

Why are we still searching for evidence of natural selection?

During the last century, especially the second half, many biologists (including myself) spent their careers in search of supporting evidence for Darwin’s theory of natural selection. It was exciting to witness the seemingly endless accumulation of published data from both laboratories and natural habitats everywhere, and for everything from bacteria to flowering plants to mammals.  We couldn’t get enough. Drunk on the exuberance of the ‘modern synthesis’, we all embraced an unquestioned faith in a new and exciting mantra:  “Nothing in biology makes sense except in the light of evolution” (Dobzhansky 1973).   It was true.

And so … Darwin was right:  natural selection works.
It’s everywhere.  Like gravity.

But the exuberance still has an oddly powerful hold for many evolutionists. Even more than a century and a half after publication of ‘The Origin’, they seem never to tire of hearing another new report (added to the now brimming pile) showing that natural selection works — even showing exactly how most researchers already expect it to.  
Hypothesis:  The insects are mostly green because the lizard is better at spotting the brown ones.  'Eureka!  Let's do an experiment to show this!  We could even check on the physiology and genes involved.  It's bound to show that Darwin was right!' 

It’s like they are addicted to the buzz of another discovery ‘fix’, because they know in advance that it always delivers.  [After all, Darwin was right].  With each new testimonial for an apparent consequence of natural selection — e.g. with details of how a particular trait is adaptive, or the mechanism for how genes inform it — there is continuing praise and blessing from the fellowship. It doesn't matter that (as is often the case) essentially the same interpretation for the same trait (in different species or from different habitats) had already been published previously — all fitting neatly in line with standard predictions of accepted theory.  These testimonials take place at annual conferences, and weekly study groups and meetings with guest speakers for evolutionary biology congregations, attended faithfully by many followers within university departments everywhere.  Even without prayers and hymns, the atmosphere is not unlike a worship service — for Darwinism — with a sea of heads nodding in approval, frothing at the mouth.

The cherished convictions of the congregation are especially strengthened when the newly reported study involves a particularly elegant experimental design, an application of new technology, a novel or expensive method of data collection, or complex and rigorous data analyses.  Hallelujah!  Praise the Lord!  A new, more potent fix with a stronger buzz!  It doesn’t even seem to matter that the evidence presented often involves a study design from which alternative results or interpretations would have been implausible, or essentially impossible to obtain.  If a study does happen to turn up negative results, they normally end up in a file drawer, unpublished — because the only acceptable explanation is that the experimental design or data collection must have been improperly carried out.  [After all, Darwin was right].  And so, routinely, investigations are conducted for which the answers (positive ones) are already known in advance.  [... not a great track record for bolstering integrity and public confidence in science].

From Darwinism itself, I suppose, we should expect nothing less.  For our ancestors, identification and alignment of one’s views and beliefs with those of a growing contingent of others not only rewarded their intrinsic need for a sense of belonging and self-esteem.  It also undoubtedly provided assurance — often grounded in truth — that this conspicuous group of contemporaries was probably on to something important (without needing to know why) that ended up rewarding reproductive success.  [Today, of course, researchers know why, in terms of strategy for rewarding tenure, promotion, and status from an expanding publication record that corroborates the cherished convictions of the congregation].

But surely we need to move on in the 21st Century. We don’t need new reports showing that gravity works, or showing that the earth revolves around the sun.  Similarly, when a new report arrives showing essentially nothing but another example of how natural selection works in ways that we all could have guessed — consistent with what established theory predicts — I am so not impressed.

Is there any valuable research left to do on natural selection?  Certainly.  It happens when there are convincing reasons to hypothesize — and the potential to test — that natural selection works (or fails to work) in surprising ways, i.e. counter to expectations based on published theory, or in ways that theory does not readily predict. It can also happen when the research is placed conspicuously in the context of contributions that have potential to address needs or goals that are important to society at large.  In many cases, the latter is easy to do, but commonly ignored (usually because it is regarded as uncool by the elites).

Please, if you must search for more evidence that Darwin was right about natural selection, at least make sure it passes the test for conspicuous and meaningful valuation branding. Otherwise, it runs a growing risk of looking like something stuck in the 20th Century.  I need regular reminders of this myself; as an ardent evolutionist, I can be as rabid as they come.


References

Dobzhansky T (1973) Nothing in biology makes sense except in the light of evolution. American Biology Teacher 35 (3): 125–129.

Saturday, October 24, 2015

Why are we sad at funerals?

Several characteristic features of human nature, social life, and culture show signs of having been shaped in part by genetic inheritance, resulting from Darwinian selection in our ancestral past.  These evolutionary roots are explored in my new book (Aarssen 2015), where an interesting example concerns the function of funeral ceremonies, and the usual human emotional responses to them.  “The cardinal fact is that all people everywhere take care of their dead in some fashion, while no animal does anything of the sort. … Only a being who knows that he himself will die is likely to be really concerned about the death of others” (Dobzhansky 1967).

Do funerals make us sad, then, because we will miss the deceased, or because funerals force us to confront the morbid fear of our own eventual demise? Do we embrace these rituals because they honour the life and memory of the deceased, or because they provide an effective remedy for the terrifying reminder that — like the deceased — ‘my life is also impermanent’?



Perhaps funerals then serve mainly to alleviate a crippling worry:  ‘Will I be remembered?  Will I leave something of myself for the future?’  If so, then these emotions might be best understood in terms of a uniquely human Legacy Drive.  As Pinker (1997) put it:  “Ancestor worship must be an appealing idea to those who are about to become ancestors.”

Also interesting is how the sadness and anxiety evoked by such confrontations with death are routinely mitigated by domains for Leisure Drive.  When the funeral is over, people are commonly anxious to forget it as soon as possible, by intentionally seeking out pleasurable, anxiety-buffering distractions — like eating, alcohol consumption, going shopping, and having sex (Mandel and Smeesters 2007, Schultz 2008, Birnbaum et al. 2011, Goldenberg 2013).

As Albert Camus (1956) put it: “Man is the only creature who refuses to be what he is.”


References

Aarssen L (2015) What Are We? Exploring the Evolutionary Roots of Our Future. Queen's University, Kingston.

Birnbaum G, Hirschberger G, Goldenberg J (2011) Desire in the face of death: Terror management, attachment, and sexual motivation. Personal Relationships 18: 1-19.

Camus A (1956) The Rebel. Alfred A. Knopf, New York.

Dobzhansky T (1967) The Biology of Ultimate Concern. The New American Library, New York.

Goldenberg JL (2013) Immortal objects: The objectification of women as terror management. Objectification and (De)humanization 60: 73-95.

Mandel N, Smeesters D (2007) Shop ‘til you drop: the effect of mortality salience on consumption quantity. Advances in Consumer Research 34: 600-601.

Pinker S (1997) How the Mind Works. Norton, New York.

Schultz N (2008) Morbid thoughts make us reach for the cookie jar. New Scientist 198: 12.

Monday, September 28, 2015

Why most published data are not reproducible

Public confidence in science has suffered another setback from recent reports that the data from most published studies are not reproducible (Baker 2015, Bartlett 2015, Begley et al 2015, Jump 2015).  The implication from this, statistically, is unavoidable, and troubling to say the least:  it means that the results of at least half of all research that has ever been published, probably in all fields of study, are inconclusive at best. They may be reliable and useful, but maybe not.  Mounting evidence in fact leans toward the latter (Ioannidis 2005, Lehrer 2010, Hayden 2013).

Moreover, these inconclusive reports are likely to involve mostly those that had been regarded as especially promising contributions — lauded as particularly novel and ground-breaking.  In contrast, the smaller group that passed the reproducibility test is likely to involve mostly esoteric research that few people care about, or so-called ‘safe research’: studies that report merely confirmatory results, designed to generate data that were already categorically expected, i.e. studies that aimed to provide just another example of support for well-established theory — or if not the latter, support for something that was already an obvious bet or easily believable anyway, even without data collection (or theory).  [A study that anticipates only positive results in advance is pointless.  There is no reason for doing the science in the first place; it just confirms what one already knows must be true].    
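
The statistical logic behind claims like Ioannidis's can be illustrated with a short back-of-envelope calculation. The sketch below is my own illustration (not from any of the cited papers, and the particular numbers are assumptions): it computes the probability that a statistically significant finding is actually true, given the prior odds that the hypothesis being tested is true, the study's power, and the significance threshold.

```python
# Illustrative sketch: positive predictive value (PPV) of a 'significant'
# published result, in the spirit of Ioannidis (2005). All numbers here
# are hypothetical assumptions chosen for illustration.

def ppv(prior_odds, power=0.8, alpha=0.05):
    """Probability that a significant finding reflects a true effect.

    prior_odds -- odds that a tested hypothesis is true (e.g. 0.1 = 1:10)
    power      -- probability of detecting a true effect
    alpha      -- significance threshold (false-positive rate per null)
    """
    true_positives = power * prior_odds   # true effects that reach significance
    false_positives = alpha * 1.0         # nulls that reach significance anyway
    return true_positives / (true_positives + false_positives)

# If only 1 in 11 tested hypotheses is actually true (odds 1:10), even
# well-powered studies produce many false positive claims:
print(round(ppv(prior_odds=0.1), 3))             # 0.615
print(round(ppv(prior_odds=0.1, power=0.2), 3))  # underpowered: 0.286
```

Under these assumed numbers, roughly four in ten significant findings would be false even before adding any bias, which is consistent with the post's point that a large fraction of published positive results may not hold up.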

Are there any remedies for this reproducibility problem?  Undoubtedly some, and researchers are scrambling, ramping up efforts to identify them [see Nature Special (2015) on Challenges in irreproducible research].  Addressing them effectively (if it is possible at all) will require nothing short of a complete re-structuring of the culture of science, with new and revised manuals of ‘best practice’ (e.g. see Nosek et al. 2015, Sarewitz 2015; and see Center for Open Science: Transparency and Openness Promotion (TOP) Guidelines). 


Some of the reasons for irreproducibility, however, will not go away easily.  In addition to outright fraud, there are at least six more — some more unscrupulous than others:


(1) Page space restrictions of some journals.  For some studies, results cannot be reproduced because the authors were required to limit the length of the paper.  Hence, important details required for repeating the study are missing.   







(2) Sloppy record keeping / data storage / accessibility.  Researchers are not all equally meticulous by nature.  In some cases, methodological details are missing inadvertently because the authors simply forgot to include them, or the raw data were not stored or backed up with sufficient care.








(3) Practical limitations that prevent ‘controls’ for everything that might matter. For many study systems, there are variables that simply cannot be controlled.  In some cases, the authors are aware of these, and acknowledge them (and hence also the inconclusive nature of their results).  But in other cases, there were important variables that could have been controlled but were innocently overlooked, and in still other cases there were variables that simply could not have been known or even imagined.   The impact of these ‘ghost variables’ can severely limit the chances of reproducing the results of the earlier study. 


(4) Pressure to publish a lot of papers quickly. Success in academia is measured by counting papers. Researchers are often anxious, therefore, to publish a ‘minimum publishable unit’ (MPU), and as quickly as possible, without first repeating the study to bolster confidence that the results can be replicated and were not just a fluke.  Inevitably of course, some (perhaps a lot) of the time, results (especially MPUs) will be a fluke, but it is generally better for one’s career not to take the time and effort to find out (time and effort taken away from cranking out more papers).  When others do however take the time and effort to check, more incidences of irreproducible results make the news headlines — news that would be a lot less common if the culture of academia encouraged researchers to replicate their own studies before publishing them.


(5) Using secrecy (omissions) to retain a competitive edge.   As Collins and Tabak (2014) note:  “…some scientists reputedly use a 'secret sauce' to make their experiments work — and withhold details from publication or describe them only vaguely to retain a competitive edge.”






(6) Pressure to publish in ‘high end’ journals.  Successful careers in academia are measured not just by counting papers, but especially by counting papers in ‘high end’ journals — those that generate high Impact Factors because of their mission to publish only the most exciting findings, and their reluctance to publish negative ones. Researchers are thus addicted to chasing Impact Factor (IF) as a status symbol within a culture that breeds elitism — and the high end journals feed that addiction (many of them while cashing in on it). The traditional argument for defending the value of 'high-end branding' for journals (supposedly measured by high IF) is that it provides a convenient filtering mechanism, allowing one to quickly find and read the most significant research contributions within a field of study.  In fact, however, the IF of a ‘high-end’ journal says little to nothing about the eventual relative impact (citation rates) of the vast majority of papers published in it (Leimu and Koricheva 2005). A high journal IF, in most cases, is driven by only a small handful of 'superstar' articles (or articles by a few 'superstar' authors). Journal 'brand' (IF) therefore has only marginal value at best as a filtering mechanism for readers and researchers.  

Moreover, addiction to chasing Impact Factor, despite not delivering what its gate-keepers proclaim, is ironically at the heart of the irreproducibility problem — for at least two reasons:  First, it fuels incentives for researchers to be biased in the selection of study material (e.g. using a certain species) that they already have reason to suspect, in advance, is particularly likely to provide support for the 'favoured hypothesis’ — the 'exciting story'.  Any data collected for different study material that fail to support the 'exciting story’ must of course be shelved — the so called ‘file drawer’ problem — because high end journals won’t publish them.

Second, addiction to chasing IF can motivate researchers to report their findings selectively, excluding certain data or failing to mention the results of certain statistical analyses that do not fit neatly with the ‘exciting story’.  This may, for example, include ‘p-hacking’ — searching for and choosing to emphasize only analyses that give small p-values.  And obviously there is no incentive here to repeat one’s experiment, ‘just to be sure’; self-replication would run the risk that the ‘exciting story’ might go away.
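
How badly p-hacking inflates the false-positive rate is easy to demonstrate with a toy simulation. This is my own minimal sketch, under a simplifying assumption: each alternative analysis of null data yields an independent p-value, which is uniform on (0, 1) when no real effect exists. (Real alternative analyses of the same dataset are correlated, so the inflation shown here is an upper bound, but the qualitative point stands.)

```python
# Toy simulation of p-hacking under a true null hypothesis: an 'honest'
# researcher runs one analysis; a p-hacker runs several and reports only
# the smallest p-value. Under the null, each p-value is uniform on (0, 1).
import random

random.seed(1)

def false_positive_rate(n_analyses, n_studies=100_000, alpha=0.05):
    """Fraction of null studies declared 'significant' when the best
    of n_analyses independent p-values is reported."""
    hits = 0
    for _ in range(n_studies):
        best_p = min(random.random() for _ in range(n_analyses))
        if best_p < alpha:
            hits += 1
    return hits / n_studies

print(false_positive_rate(1))   # ~0.05: the nominal rate, honestly earned
print(false_positive_rate(10))  # ~0.40: 'significance' found in null data
```

With ten independent analyses to choose from, the chance of at least one p < 0.05 is 1 − 0.95¹⁰ ≈ 0.40 — eight times the nominal rate, with no real effect anywhere in the data.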
  


All of this means that the research community and the general public are commonly duped — led to believe that support for the 'exciting story’ is stronger than it really is.  And this is revealed when later research attempts, unsuccessfully, to replicate the effect sizes of earlier supporting studies.  Negative findings in this context then, ironically, become more ‘publishable’ (including for study material that was already used earlier and that ended up in a file drawer somewhere).  Hence, empirical support for an exciting new idea commonly accelerates rapidly at first, but eventually starts to fall off ('regression to the mean') as more replications are conducted — the so called ‘decline effect’ (Lehrer 2010).
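
The 'decline effect' described above can be reproduced as simple regression to the mean. The following is my own illustrative sketch under an assumed model (the effect size, standard error, and significance filter are hypothetical): many noisy estimates of a small true effect are generated, but only the 'significant' ones pass the publication filter, so the first published estimates overstate the truth and unfiltered replications then look weaker.

```python
# Sketch of the 'decline effect' as regression to the mean: publication
# filters on significance select the luckiest (largest) estimates, so
# early published effect sizes are inflated relative to the true effect.
import random

random.seed(2)

TRUE_EFFECT = 0.2   # assumed true effect size
SE = 0.15           # assumed standard error of each study's estimate

# 10,000 independent studies, each producing a noisy estimate:
estimates = [random.gauss(TRUE_EFFECT, SE) for _ in range(10_000)]

# Crude publication filter: the estimate must exceed ~1.96 standard errors.
published = [e for e in estimates if e > 1.96 * SE]

print(sum(published) / len(published))   # inflated: well above 0.2
print(sum(estimates) / len(estimates))   # unfiltered replications: ~0.2
```

The gap between the two averages is the decline effect: nothing about the phenomenon changed between the original studies and the replications — only the selection filter was removed.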

The progress of science happens when research results reject a null hypothesis, thus supporting the existence of a relationship between two measured phenomena, or a difference among groups — i.e. a ‘positive result’.  But progress is also supposed to happen when research produces a 'negative result' — i.e. results that fail to reject a null hypothesis, thus failing to find a relationship or difference.  Science done properly then, with precision and good design but without bias, should commonly produce negative results, even perhaps as much as half of the time.  But negative results are mostly missing from published literature.  Instead, they are hidden in file drawers, destroyed altogether, or they never have a chance of being discovered.  Because positive results are more exciting to authors, and especially journal editors, researchers commonly rig their study designs and analyses to maximize the chances of reporting positive results.

The absurdity of this contemporary culture of science is now being unveiled by the growing evidence of failure to reproduce the results of most published research.  The results of new science submitted for publication today, in the vast majority of cases, conform to researchers' preconceived expectations — and always so for those published in high end journals. This is good reason to suspect a widespread misappropriation of science.

There is a lot that needs fixing here.


References

Baker M (2015) Over half of psychology studies fail reproducibility test: Largest replication study to date casts doubt on many published positive results. Nature

Bartlett T (2015) The results of the reproducibility project are in: They’re not good. The Chronicle of Higher Education.  http://chronicle.com/article/The-Results-of-the/232695/

Begley CG, Buchan AM, Dirnagl U (2015) Robust research: Institutions must do their part for reproducibility. Nature.  http://www.nature.com/news/robust-research-institutions-must-do-their-part-for-reproducibility-1.18259?WT.mc_id=SFB_NNEWS_1508_RHBox

Collins FS, Tabak LA (2014) Policy: NIH plans to enhance reproducibility. Nature.

Hayden EC (2013) Weak statistical standards implicated in scientific irreproducibility: One-quarter of studies that meet commonly used statistical cutoff may be false. Nature.

Ioannidis JPA (2005) Why most published research findings are false.  PLoS Medicine.

Jump P (2015) Reproducing results: how big is the problem? Times Higher Education. https://www.timeshighereducation.co.uk/features/reproducing-results-how-big-is-the-problem?nopaging=1

Lehrer J (2010) The truth wears off: Is there something wrong with the scientific method?  The New Yorker.  http://www.newyorker.com/magazine/2010/12/13/the-truth-wears-off

Leimu R, Koricheva J (2005) What determines the citation frequency of ecological papers?  Trends in Ecology and Evolution 20: 28-32. 

Nosek et al. (2015) Promoting an open research culture.  Science 348: 1422-1425.

Sarewitz D (2015) Reproducibility will not cure what ails science. Nature.


Thursday, August 27, 2015

Evolutionary roots of tabula rasa


Standard Social Science metaphor for the human mind as a 'blank slate' at birth
According to the 'Standard Social Science Model', the human mind at birth is a 'blank slate', or, in Latin, tabula rasa.  This means that our understanding of the world, and the manner in which we think and behave, are acquired only through exposure to and learning from our environment, including especially the social environment.  Variations in human behaviours then are considered to be entirely a consequence of variable environments and variation in the opportunity to learn from different environments.  In other words, we and we alone are the arbiters of our own minds; they are only what our experiences and opportunities enable them to be, and are not to any significant degree a product of genetic inheritance influenced by Darwinian selection in the ancestral past. 


In contrast, according to the 'Evolutionary Science Model', the assembly of the mind is certainly affected by variation in environment / learning experience, and profoundly so.  But it is also — because of genetic inheritance — partially and variably structured at birth.   From this pre-structure, then, the mind develops a degree of ‘prepared learning’.  "In prepared learning, we are innately predisposed to learn and thereby reinforce one option over another. We are ‘counter-prepared’ to make alternative choices, or even actively to avoid them" (Wilson 2012). Humans thus learn in ways that are modulated in part by innate predispositions, and by partially instinctual responses to certain environmental cues — responses that influence a range of impulsive motivations and behaviours which generally rewarded the reproductive success of ancestors. 

Genetic Determinism metaphor for the human mind as a blueprint or computer program
Importantly, this Darwinian view of the mind should not be confused with 'genetic determinism', where the mind is likened to a blueprint or a computer algorithm, with features solely determined by ‘coding’ from genes.  Instead, the human mind is like a jukebox (Tooby and Cosmides 1992), where a number of 'tracks' (genetic instructions) are already stored in the 'machine', but particular environments determine which 'buttons' get pressed to play particular tracks.  The human mind can also be likened to a colouring book (Kenrick et al. 2010), where the inner structure of pre-drawn lines (genetic instructions) interacts with environmental inputs (different artists with differently coloured crayons) to determine the final phenotype of the behaviour (picture).




Accordingly, there is very little about the roles of genes and environment in the mental life of humans that reflects alternatives in a 'tug of war' (famously referred to as 'nature versus nurture').  In other words, their relationship in human evolution has been more of an inter-dependence or 'blending', and not one where either has generally overwhelmed the other.  As Dobzhansky (1963) put it:  "The premise which cannot be stressed too often is that what the heredity determines are not fixed characters or traits but developmental processes.  The path which any developmental process takes is, in principle, modifiable both by genetic and by environmental variables.  It is a lack of understanding of this basic fact that is, it can safely be said, responsible for the unwillingness, often amounting to an aversion, of many social scientists … to admit the importance of genetic variables in human affairs."

In my new book (Aarssen 2015), coming out later this year, I propose that the 'blank slate' model of the mind, itself, may be a cultural (memetic) product of evolution by natural selection.  Being a staunch defender of the 'blank slate' may just be a modern manifestation of a deeply ingrained disposition (inherited from the ancestral past) to be a staunch defender of the 'self' — a disposition that was adaptive to ancestors because it helped to dispel the worry that one might be stuck with an intrinsically inferior 'self'.  The Evolutionary Science Model of the mind poses a threat, therefore, because it espouses that an 'inferior self' is likely to be informed, at least partially, by less-than-superior genes. Tenacious belief in the 'blank slate' then (like belief in religion) provides a buffer from the fear of having limited potential for memetic legacy (with the latter conjured by self-impermanence anxiety — explored in an earlier post).  It also facilitates the appealing notion of a sense of 'ownership' of who you are; i.e. giving license for you — your 'inner self' rather than your physical DNA — to take personal credit for the kind of person you turned out to be, and hence the quality of memetic legacy that you have potential to leave — thus bolstering self-esteem (a handy toolkit item for promoting gene transmission success).  If this interpretation is correct, then the debate between the Standard Social Science ('blank slate') Model of the human mind and the Evolutionary Science Model of the human mind is misguided; in other words, the 'blank slate' view is not in conflict with Darwinian evolution — it is a cultural product of it.




References

Aarssen L (2015) What Are We?  Exploring the Evolutionary Roots of Our Future.  Queen's University, Kingston.

Dobzhansky T (1963) Anthropology and the natural sciences: The problem of human evolution. Current Anthropology 4: 138+146-148.

Kenrick DT, Nieuweboer S, Buunk AB (2010) Universal mechanisms and cultural diversity: replacing the blank slate with a coloring book. In Schaller M, Norenzayan A, Heine SJ, Yamagishi T, Kameda T (Eds.), Evolution, Culture and the Human Mind. Psychology Press, New York.

Tooby J, Cosmides L (1992) The psychological foundations of culture. In Barkow J, Cosmides L, Tooby J (Eds.), The Adapted Mind: Evolutionary Psychology and the Generation of Culture, pp. 19–136. Oxford University Press, New York.

Wilson EO (2012) The Social Conquest of Earth. Liveright Publishing Corporation, New York.



Tuesday, July 28, 2015

Evolution of the 'childfree' culture

In the middle of the last century, Pulitzer Prize-winning American author Phyllis McGinley (1956) wrote:  “Women are the fulfilled sex.  Through our children we are able to produce our own immortality, so we lack that divine restlessness which sends men charging off in pursuit of fortune or fame or an imagined Utopia.”  

A similar notion is reflected in Murray’s (2003) commentary on why women comprise such a small percentage of the highly accomplished people in recorded history: “So closely is giving birth linked to the fundamental human goal of giving meaning to one’s life that it has been argued that, ultimately, it is not so much that motherhood keeps women from doing great things outside the home as it is men’s inability to give birth that forces them to look for substitutes.” 

As Sir Francis Bacon (1561-1626) put it:  "The perpetuity by generation is common to beasts; but memory, merit, and noble works are proper to men.  And surely a man shall see the noblest works and foundations have proceeded from childless men, which have sought to express the images of their minds, where those of their bodies have failed.  So the care of posterity is most in them who have no posterity" (Bacon 1985). 

But the long history of patriarchal subjugation of women (Joshi 2006) has surely been at least equally significant in keeping women "from doing great things outside the home." This is evident from a dramatic cultural shift that began to unfold around the middle of the last century:  women worldwide managed, in greater frequencies, to break free from this oppression, and thus became more empowered to decide for themselves whether they will have children, and how many.

With this freedom, many women elected to have no children at all — choosing to abandon the uniquely female opportunity for symbolic immortality through motherhood.

They chose instead to seek 'fulfillment' from a different domain for legacy — personal accomplishment — as well as in domains for leisure.  Both were largely denied to their oppressed maternal ancestors, historically controlled mostly by men for men.

Cover of Time magazine, August 2013
These women are the architects of the modern ‘childfree culture’ (Shorto 2008, Frazer 2011, Sandler 2013, Kingston 2014). Their partners of course also play a role. But because women are now, for probably the first time in history, widely and significantly in control of their own fertility, they are principally in charge of the agenda and momentum of the childfree culture.  On the surface, it has all the symptoms of a Darwinian paradox. But digging deeper reveals that it may actually have evolutionary roots.  In terms of ‘fulfillment’, it may be understood more directly from a historically male perspective.  In other words, it is men who have been the ‘unfulfilled sex’ — possibly in part because of their inability to give birth, but perhaps more because of paternal uncertainty (thus accounting for their "charging off" in pursuit of symbolic immortality through "fortune or fame").

Plus, partly in order to minimize paternal uncertainty, many or most women — throughout most of recorded history — were essentially forced, by patriarchal subjugation and/or religious imperatives (also controlled by men), to bear offspring (often many) regardless of whether they had any intrinsic desire to be hard-working mothers, or mothers at all.  Presumably, many didn’t.


Our 'unfulfilled' male ancestors thus coerced female ancestors — many of whom happened not to have a particularly strong, so-called 'maternal instinct' (or any at all) — to (nevertheless) bear offspring, including daughters who inherited their mothers’ weak maternal instincts.  This presents an intriguing evolutionary hypothesis:  women of the ‘childfree’ culture today, or significant numbers of them at least, may be descendants of these daughters (Aarssen and Altman 2012).

Cover of MacLean's magazine, May 28, 2007
The childfree culture, therefore, may be a product of what we might call a ‘failed disfavouring selection’ hypothesis — i.e., unlike many traits that can be interpreted as a consequence of being favoured by natural selection in the ancestral past, attraction to a childfree lifestyle may instead be a consequence of not having been disfavoured.

Importantly, however, things may be about to change dramatically: Choosing to be childfree — as women are increasingly free to do (as they should be) — means zero gene transmission through direct lineage.  Accordingly, selection against weak maternal instinct (or weak 'parenting drive' — see below) may soon be ramping up (Aarssen 2007). If so, then might this selection, within say a generation or two, spell an abrupt end to the childfree culture?



The childfree culture is one of the reasons that world population growth rate started to decline near the end of the last century — a welcome trend for a crowded planet with “too many people and too much stuff” (Ehrlich and Ehrlich 2008). An important question is whether human birth rate might continue to drop voluntarily and substantially in the coming decades — both in developed countries, where more and more women are embracing small families and the childfree culture, and in less developed countries, where empowerment for women is gaining more and more momentum (Engelman 2010). But this may be unlikely to unfold so neatly, and maybe not at all — for one fairly obvious reason:  The parents of the future will not be products of the childfree culture. Instead, many of them will be the descendants of women today who are choosing freely and anxiously to raise children, and perhaps especially those who are anxious to raise a lot of them.

Natural selection never limits the reproductive success of resident individuals of any species in order to minimize the collective misery of over-population.  Echoing Charles Galton Darwin (1953), Theodosius Dobzhansky (1962) warned, over half a century ago:

“Reduction of the birth rates is necessary if the population growth is to be contained.  Family planning and limitation are not, however, likely to be undertaken by everybody simultaneously.  Those who practice such controls will contribute to the following generations fewer genes in proportion to their number than those who do not.  Fewer and fewer people will, therefore, be inclined to limit their families as the generations roll by.  The human flood, rising higher and higher, will overwhelm a multitudinous but degenerate mankind.  The assumption implicit in this argument is, of course, that the craving for perpetuation of one’s seed is uncontrollable by reason and education, and that people will go on spawning progeny, even knowing that it is destined to be increasingly miserable.”   

There are already signs that this effect may be unfolding now — in part, I suggest, because of selection that has only recently been allowed to favour a strong 'parenting drive' (by disfavouring weak parenting drive).   


‘Parenting drive’ is defined here as an intrinsic attraction (informed in part by genetic inheritance) to memetic legacy through influence on offspring, but one that is also heavily infused with intrinsic attraction to a particular kind of pleasure reward at the same time — triggered specifically (odd as it may seem to some) by the hard work of parenting.  Research has shown that there is often a sense of ‘meaning’ in purposeful toil and mundane routine (Baumeister et al. 2013).  Pre-occupation with hard work (especially when purposeful) leaves little time for worry or depressing thoughts, including those rooted in self-impermanence anxiety.  A distracting pleasure then — like that of leisure — is obtained from just staying busy.  And the ‘busy work’ of parenting is available in greater abundance, of course, with increasing family size (Angeles 2010, Nelson et al. 2013).

A strong ‘parenting drive’ can be seen in recent pro-natalist movements involving attraction to large family size — typically supported by wealth, and combined often with religion (e.g. the ‘Quiverfull’ movement; Joyce 2010), but not in all cases (Brooks 2004, Kaufmann 2010, Rowthorn 2011, Caplan 2012).  Importantly, prior to the women's liberation movement of the last century, weak parenting drive (despite its obvious disadvantage for evolutionary fitness) probably never had widespread opportunity to be strongly disfavoured by natural selection.  But today this opportunity is growing rapidly and globally, as women everywhere are becoming fully empowered, like never before, to take control of their sexuality and fertility, as well as their livelihood and personal goals.

These freedoms for women are of course cause for great celebration, and are long overdue. Is it possible, however, that without coercive measures to limit the number of births per female, contemporary pro-natalist cultures (like ‘Quiverfull’) could — because of Darwinian selection — soon displace the childfree culture?  And could this result in rising birth rates in the coming decades?  If so, a collapsing civilization (Ehrlich and Ehrlich 2013, Bradshaw and Brook 2014) will undoubtedly look more certain than it already does.



References

Aarssen LW (2007) Some bold evolutionary predictions for the future of mating in humans. Oikos 116: 1768-78.

Aarssen LW, Altman T (2012) Fertility preference inversely related to ‘legacy drive’ in women, but not men: Interpreting the evolutionary roots, and future, of the ‘childfree’ culture.  The Open Behavioral Science Journal 6: 37-43. 

Angeles L (2010) Children and life satisfaction. Journal of Happiness Studies 11: 523-538.

Bacon F (1985) Francis Bacon: The Essays. Penguin, London.

Baumeister RF, Vohs KD, Aaker JL, Garbinsky EN (2013) Some key differences between a happy life and a meaningful life.  The Journal of Positive Psychology 8: 505–516.

Bradshaw CJA, Brook BW (2014) Human population reduction is not a quick fix for environmental problems. Proceedings of the National Academy of Science 111: 16610-16615.

Brooks D (2004) The New Red-Diaper Babies. New York Times.

Caplan B (2012) Selfish Reasons to Have More Kids: Why Being a Great Parent is Less Work and More Fun Than You Think. Basic Books, New York.

Dobzhansky T (1962) Mankind Evolving: The Evolution of the Human Species. Yale University Press, New Haven.

Ehrlich P, Ehrlich A (2008) The problem is simple: too many people, too much stuff. http://www.alternet.org/story/94268/the_problem_is_simple%3A_too_many_people%2C_too_much_stuff.

Ehrlich PR, Ehrlich AH (2013) Can a collapse of global civilization be avoided? Proceedings of the Royal Society B  280:  http://dx.doi.org/10.1098/rspb.2012.2845

Engelman R (2010) More: Population, Nature, and What Women Want. Island Press, Washington DC.

Frazer B (2011) The No-Baby Boom: A growing number of couples are choosing to live child-free. And you might be joining their ranks. Details, April 2011. http://www.details.com/culture-trends/critical-eye/201104/no-baby-boom-non-breeders

Joshi ST, ed (2006) In Her Place: A Documentary History of Prejudice Against Women. Prometheus Books, Amherst NY.

Joyce K (2010) Quiverfull: Inside the Christian Patriarchy Movement.  Beacon Press, Boston.

Kaufmann E (2010) Shall the Religious Inherit the Earth?  Demography and Politics in the Twenty-First Century. Profile Books, London.

Kingston A (2014) The no-baby boom: Social infertility, baby regret and what it means that shocking numbers of women aren’t having children.  MacLean’s magazine, March, 2014.

McGinley P (1956) Women are wonderful: They like each other for all the sound, sturdy virtues that men do not have. Life magazine, 24 Dec 1956.    

Murray C (2003) Human Accomplishment: The Pursuit of Excellence in the Arts and Sciences, 800 BC to 1950. HarperCollins, New York.

Nelson SK, Kushlev K, English T, Dunn EW, Lyubomirsky S (2013) In defense of parenthood: Children are associated with more joy than misery. Psychological Science 24: 3–10.

Rowthorn R (2011) Religion, fertility and genes: a dual inheritance model. Proceedings of the Royal Society B 278: 2519–2527.

Sandler L (2013) Having It All Without Having Children. Time Magazine, August 2013.


Friday, June 26, 2015

The evolution of cool

For my new book (Aarssen 2015; coming out later this year), I’ve been thinking lately about the culture of ‘cool’, and in particular whether it can be understood in terms of probable evolutionary roots. The term ‘cool’ has assumed several connotations over a long history of social evolution, commonly referring to characteristics of people, but also, in some cases, things or situations. But one version in particular has endured, and remains conspicuous today: the person who is regarded as ‘cool’ because of a distinctive attitude — represented in certain ways of walking or speaking, or certain types of facial expressions or gestures (sometimes with props/accessories like sunglasses or cigarettes) — that evokes a demeanor of composure, self-confidence, and nonchalance towards situations where, normally, excitement or emotional vulnerability would be expected.

This is usually also accompanied by a distinctive appearance or presentation — e.g. reflected in certain styles of dress or grooming (e.g. hair, beard), certain choices for hobbies, or certain 'tastes' (e.g. in music, preferred restaurants, or home furnishings) — that evokes an impression of being impervious to the sway of popular fashions and conventions. These have been the trademarks of the beatniks, bohemians, eclectics, freethinkers, eccentrics, hipsters, and elitists.

The question is: Why are we impressed with this? What interests me is whether it might be rooted in our intrinsic attraction to anything that points to potential for defying inevitable death — an apparent indifference to awareness of self-impermanence.
Do people admire coolness and want to be like cool people because this signals a talent that could be deployed for dismissing or buffering the normally crippling private anxiety of mortality salience?
Did our ancestors want to be around cool people (including as a potential mate) because they thought that this talent might somehow rub off on them?

Perhaps coolness served our distant ancestors by ‘announcing’ truthfully (to potential adversaries and potential mates) an unflappable superiority: “My talents for poise and emotional control have such high quality that nothing fazes me; I can handle any challenge with dispassionate ease.” The embedded message here is that this includes the challenge of responding effectively to the universal human terror from the ‘curse of consciousness’: “to know that one is food for worms”, as Becker (1973) put it. According to this hypothesis then, potential mates that were authentically cool were not only socially popular; they were sexually attractive and probably also a good bet for being equipped to provide for (and to favourably inspire) one’s offspring — all good things for gene transmission success.

Natural selection (and hence cultural selection shaped in part by natural selection) in our ancestral past, therefore, may have favoured (probably especially in males) dispositions (informed by genetic inheritance) for public mannerisms and styles of many sorts that projected a confident, self-determining, calm, and collected persona, with an elite knowingness — as truthful advertisements of these personifications. This would have also required, of course, the evolution of capacity to accurately detect and correctly interpret these advertisements in others. And at the same time, therefore, the evolution of strategies for deception would have been inevitable. The latter, much of the time, can reward reproductive success just as well as the ‘real deal’.

Today therefore, the ‘cool guy’, much of the time, is likely to be a fake, a complete scam — a deliberate deception designed to attract the notice of others, serving only to bolster one’s self-esteem (a handy tool-kit item, of course, for buffering self-impermanence anxiety). Like the mostly female culture of cosmetics, and the largely male culture of conspicuous wasteful consumerism, the culture of ‘cool’, one might expect then, is commonly also a false advertisement. In other words, the cosmetics user is often not really as young as she appears, the young male who buys expensive fast cars (that he can’t afford) is often not really as rich as he appears, and the ‘cool guy’ whose countercultural style screams — “I’m too cool to care about convention or the latest trends that the masses follow” — is often just very concerned about appearing ‘cool’, and very talented (and concerned) about concealing that concern.

The latter is plainly evident in some men whose clothing makes a statement, but not because it is flamboyant or expensive. Just the opposite; instead, it looks understated for the venue relative to the average or ‘standard’ expectation — e.g. because it looks (sometimes ever so subtly) like it is past its best-before date, or because it is minimalist (e.g. a clean but plain undershirt, usually white, grey or black, and untucked or half-tucked), or because it is a style from an earlier decade. On the surface of course, the ‘cool guy’ here — despite obviously being able to afford the latest fashion — looks like he just can’t be bothered to make any significant effort in deciding what should be in his wardrobe. After all, only ‘cool guys’ can get away with that.



But nothing could be further from the truth; the wardrobe was cleverly and carefully chosen — including even with intentional visits to the used clothing store — in order to give the appearance that it was not carefully chosen.


And so, just as with cosmetics and expensive fast cars, contrived ‘coolness’ will fool some of the people some of the time. The interesting and entertaining thing here lies in watching how the contrived ‘cool’ need to continually reinvent themselves, as the masses discover and copy what is ‘cool’ (thus rendering it no longer cool) — and in trying to decipher who these many pretentious fakes are, lurking in our midst. [It’s easier to spot them when you’ve been one yourself.]


References

Aarssen LW (2015) What Are We? Exploring the Evolutionary Roots of Our Future. Queen’s University, Kingston.

Becker E (1973) The Denial of Death. Simon and Schuster, New York.

Thursday, May 28, 2015

Wherefore the curse of consciousness?


Around 40–50 thousand years ago, but possibly earlier, our ancestors started to become equipped for profound enlightenment:  they discovered a sense of time, and got a glimpse of self-awareness. With consciousness (variously called 'theory of mind', the 'human spark', and the 'mind’s big bang'), our species began to interpret self in relation to the passage of time, and in relation to a recognition of self-awareness in others. With this came capacity to plan for the future, and to envisage an existence of unseen others and events from the past that was understood to have been just as ‘real’ as the present. Combined with growing powers of reasoning, computation, curiosity, insight, imagination, and memory, it meant being able to use abstract/symbolic thinking to see beyond the actual to the possible, to anticipate the thoughts and actions of others, and to deliberately control one’s behaviour. With these cognitive skills, Homo sapiens was launched on a trajectory of genetic and cultural evolution unlike any other in the history of life — sometimes called the 'great leap forward'. Our predecessors who lacked them did not become our ancestors. 


But all of this came with an emotional cost: anxiety about one’s imagined, eventual mortality at some unknown time in the future. The uniquely human capacity to foresee one’s own eventual death normally evokes a wellspring of terror — the 'curse of consciousness'. It starts in early childhood and extends cross-culturally. Numerous studies in social psychology involving 'terror management theory' have shown how death reminders commonly evoke a wide range of behaviours associated with "world-view and esteem defense and striving" (Burke et al. 2010).



Many critical questions, however, remain unanswered:  Where did this terror come from? Why are humans so primed to feel it?  Was 'eventual mortality' anxiety just a by-product of something else favoured by natural selection?   Was it linked to a cost-benefit trade-off? How did our ancestors cope with it?  Did they just put up with it as best they could?  Why did natural selection not eliminate this seemingly maladaptive cognitive domain?  Or was it perhaps never really maladaptive at all?  Did natural selection play a role in shaping motivational and behavioural responses to it, in ways that facilitated deployment of 'anxiety buffers'?  Or did natural selection somehow turn the emotional cost of this anxiety into a fitness benefit?

In my new book, to be published later this year (Aarssen 2015), I explore possible answers to these questions. Below is an excerpt from the book, describing three plausible evolutionary hypotheses, where 'eventual mortality' anxiety can be interpreted — in terms of genetic fitness — as maladaptive, neutral, or adaptive.

(i) 'Eventual mortality' anxiety is just ancient 'survival instinct' gone awry, misemployed by a fitness trade-off cost of consciousness — i.e. maladaptive in terms of genetic fitness. 


According to this hypothesis, time- and self-awareness gave us knowledge of eventual mortality, which automatically deploys our primitive survival instinct, thus triggering its usual emotional response: anxiety.  But this anxiety, and hence the consciousness that caused it, imposed a fitness cost — one that was worth paying, because the fitness benefits of consciousness were greater.  Survival instinct in other mammals is mostly about responding with 'fight, flight, or freeze' (accompanied by fear) to perception of a looming danger (e.g. attack from a predator or a rival) — or responding with frantic (fearful) desperation to an immediate or impending shortage of an essential resource (e.g. starvation). Importantly, these mortality risks all involve physical pain (from injury or hunger), and the above responses of course also characterize expressions of traditional Survival Drive in humans. The crucial question here, however, is this:  Is it possible that deployment of survival instinct in humans (because we can imagine ourselves in the future) need not (as for other mammals) require an immediate or imminent threat to continued existence?  In other words, did humans inherit a survival instinct so overpowering that it also manifests as fear even in response to events that we know will only eventually happen, like death?  The answer, according to this hypothesis, is yes — i.e. humans and humans alone have a survival instinct so acute that it routinely compels us to be anxious about our own death, even though it can only be imagined, as an eventuality, sometime in the future — resulting (if young) even in the distant future from just ordinary old age, even peacefully without violence, injury or pain, AND even when all of this resides in the mind only subconsciously.  If this is true, the emotional cost of this anxiety may or may not — as a trade-off of consciousness — also have imposed a genetic fitness cost for our ancestors.  But if it did (i.e. impose a cost that partially compromised the fitness benefit of time- and self-awareness), then we might expect natural selection to have favoured cognitive domains like Leisure Drive that — through distractions — at least partially mitigated the anxiety.

(ii) 'Eventual mortality' anxiety is/was just a neutral by-product of 'fear of the unknown' — i.e. neutral in terms of genetic fitness. 


In this case, the anxiety had an emotional cost only, without imposing any significant fitness cost (or benefit). Imagining a non-violent and painless death, happening sometime eventually in our advancing years, is just as much a personal mystery today as it was for our ancestors. We have no idea what the experience will be like, or even whether there will be anything to experience at all.  Eventual mortality, then, can be considered as just one item in a list of several unknown and unseen things that provide no clue about what they are — like the quiet dark. Behind some of these, however, at least some of the time (but regularly enough in the experience of ancestors), danger was lurking — maybe a predator or an ambush by competitors. Accordingly, humans apparently evolved a general all-purpose, hard-wired, instinctual caution, and sometimes fear (despite its emotional cost), regarding anything and everything that couldn’t be understood, couldn’t be sensed, or couldn’t be predicted, because this was, on average, good for gene transmission success in ancestors.  Importantly, however, this differs from the survival instinct triggered by known hazards (like lack of food, or an attacker that is in plain view, or that makes a familiar sound revealing its presence nearby). Much or most (and in some cases virtually all) of the time, when ancestors were confronted with a mysterious unknown, there was really no danger 'lurking in the dark' at all.  The latter, according to this hypothesis, was true all of the time with respect to ‘eventual mortality’ anxiety.  In other words, a general anxiety ('just in case') about things unknown was undoubtedly adaptive for our ancestors, but the specific anxiety about one’s imagined eventual death, at some unknown time in the future, never was. And neither was it maladaptive [but even if it was, then again, we might expect natural selection to have favoured cognitive domains like Leisure Drive, serving to buffer the anxiety]. 
This is whimsically captured in the definition of life from Bierce (1911): "LIFE, n. A spiritual pickle preserving the body from decay. We live in daily apprehension of its loss; yet when lost, it is not missed." 

(iii) 'Eventual mortality' anxiety was directly favoured by natural selection — i.e. adaptive in terms of genetic fitness. 


In this case, the anxiety itself directly promoted gene transmission success in ancestors. This seems counter-intuitive at first, but according to this hypothesis, the anxiety here is not really associated with the eventual experience of literal death; it is associated more directly with what eventual death imposes: self-impermanence.  Hence, it is not really rooted in what is traditionally understood as a survival instinct, or 'survival drive'.  Self-impermanence anxiety is about worrying that life is absurd — pointless and meaningless — not just because time brings eventual death, but more specifically because (in bringing eventual death) time inevitably annihilates all that we do, and all that we are.  Self-impermanence anxiety, then, can be buffered by Leisure Drive, but also by Legacy Drive — i.e. a drive to leave (despite knowledge of inevitable mortality) something of oneself — a legacy — for the future. Legacy Drive then essentially 'comes to terms' with mortality salience. And yet, at the same time, it is always just a delusion. Consider that today, for every deceased human that has ever existed (save for a minuscule micro-fraction), it is as though s(he) never did.   Only genes have legacy (Dawkins 1989). But Legacy Drive has a powerful hold on human nature nonetheless, because (according to this hypothesis) it has evolutionary roots in an ancestral attraction to 'memetic legacy' through offspring; i.e. through feeling a sense that one can create a lasting 'carbon copy' of self by shaping the minds of one's offspring — to instill within them the same things (the ideas, values, beliefs, ego, self-image/esteem, personality, and virtue of character) that define who you are.  This is also mostly a delusion, with parents easily fooled (e.g. see Harris 1998).  
But importantly, the reward nevertheless goes to gene transmission because offspring are the very vehicles of genetic legacy, including for genes that might inform Legacy Drive (as well as Leisure Drive) (Aarssen 2007, 2010). As Barash (2012) put it, "Maybe awareness of mortality isn’t merely a tangential consequence of consciousness but its primary adaptive value, if it has the effect of inducing people to seek yet another way of rebelling against mortality: by reproducing."

Humans are not as smart as they often think they are.  We are easily fooled, distracted, and deluded.  Our motivations did not evolve to deliver us truth — only fitness.  


References

Aarssen LW (2007) Some bold evolutionary predictions for the future of mating in humans. Oikos 116: 1768-78.

Aarssen LW (2010) Darwinism and meaning. Biological Theory 5: 296–311.

Aarssen LW (2015) What Are We? Exploring the Evolutionary Roots of Our Future. Queen’s University, Kingston.

Barash DP (2012) Homo Mysterious: Evolutionary Puzzles of Human Nature. Oxford University Press, New York.

Bierce A (1911) The Devil’s Dictionary. http://www.thedevilsdictionary.com

Burke BL, Martens A, Faucher EH (2010) Two decades of terror management theory: A meta-analysis of mortality salience research. Personality and Social Psychology Review 14: 155-195.

Dawkins R (1989) The Selfish Gene, rev. ed. Oxford University Press, Oxford.

Harris JR (1998) The Nurture Assumption: Why Children Turn Out the Way They Do. Simon & Schuster, New York.
