Instead, I’ll present some of the key findings from a relatively new (April 2016) review article on the science of sexual orientation by J. M. Bailey and colleagues in the journal Psychological Science in the Public Interest. It is one of the most comprehensive and even-handed review articles written on the subject. The authors take a strictly academic approach because, let’s face it, the science surrounding sexual orientation has been used and abused by both pro- and anti-gay-rights folks. (Note: this article does not address transgender or gender identity issues.)
This article is too long to go into all the details so instead I’m just going to present the main highlights that I prepared for a research report a few months back. Enjoy!
Political controversies pertaining to the acceptance of non-heterosexual (lesbian, gay, bisexual) orientation often overlap with controversies surrounding the science of sexual orientation. In an attempt to clarify the erroneous use of scientific information from both sides of the debate, this article 1) provides a comprehensive review of the current science of sexual orientation, and 2) considers the relevance of scientific findings to political discussions on sexual orientation.
Top Takeaways from the Review:
The scientific evidence strongly supports non-social versus social causes of sexual orientation.
The science of sexual orientation is often poorly used in political debates, but scientific evidence can be relevant to a specific, limited number of issues that may have political consequences.
The scientific evidence strongly supports non-social versus social causes of sexual orientation (nature vs nurture).
Prevalence of non-heterosexual orientation (analysis of 9 large studies): 5% of U.S. adults.
Summary of the major, scientifically well-founded findings supporting non-social causes:
Gender non-conformity during childhood (before the onset of sexual attraction) strongly correlates with non-heterosexuality as an adult.
Same-sex behavior has been documented in hundreds of species, and it occurs regularly in a few (mostly primates and sheep).
Reported differences in the structure of a specific brain region (SDN-POA) between heterosexual and homosexual men.
Hormone-induced changes in the SDN-POA during development in animal studies and subsequent altered adult sexual behavior (the organizational hypothesis).
Reports of males reared as females who nonetheless exhibit attraction to women as adults.
Twin studies suggest only moderate genetic/heritable influence on sexual orientation.
Several reports identify a region on the X chromosome associated with homosexuality.
The most consistent finding is that homosexual men tend to have a greater number of biological older brothers than heterosexual men. (fraternal-birth-order effect)
The science of sexual orientation is often poorly used in political debates, but scientific evidence can be relevant to a specific, limited number of issues that may have political consequences.
The question of whether sexual orientation is a “choice” is logically and semantically confusing and cannot be scientifically proven. It should not be included in political discussions.
Examples of scientifically reasonable questions include:
Is sexual orientation determined by non-social (genetic/hormonal/etc.) or social causes? (nature vs nurture)
Is sexual orientation primarily determined by genetics or environment?
Specific cases in which scientific evidence can be used to inform political decisions:
The belief that homosexual people recruit others to homosexuality (the recruitment hypothesis). This belief was espoused by President Museveni of Uganda in 2014 and was used to justify Uganda’s notorious anti-homosexuality bill (since repealed).
No studies exist that provide any type of evidence in support of this hypothesis.
One of the great questions in the addiction field is why some people become full-blown addicts while others can use drugs occasionally without progressing to anything more serious. One part of the answer definitely has to do with the drug itself. For example, heroin causes a more intensely pleasurable high than cocaine, and people who try heroin are more likely to become addicted to it than to cocaine. But that’s not the whole story.
I’ve written previously about how a negative, stressful environment can have long-lasting negative impacts on the development of a child’s brain (also known as early-life stress, or ELS). ELS such as childhood abuse (physical or sexual) and neglect can increase the risk for a whole host of problems as an adult, such as depression, bipolar disorder, PTSD, and of course drug and alcohol abuse. There’s even a risk for more physical ailments like obesity, migraines, cardiovascular disease, diabetes, and more.
Childhood abuse/neglect = psychological and physical problems as an adult.
This idea doesn’t sound too controversial, but believe it or not, the notion that a bad or stressful situation as a child could affect you as an adult was once laughed off as impossible. It’s only within the last decade or so that a wealth of research has shown that ELS can physically change the brain and that these changes can last through the abused child’s entire life.
This recent review paper (published in the journal Neuron) is an excellent, albeit technical, summary of dozens of research papers on this subject and the underlying biology behind their findings.
I especially love the quotes the author included at the beginning of the article:
And even more recently, yet another research paper has come out that highlights how important childhood is for the development of the brain and how a stressful childhood environment can impact the function of a person as an adult.
This most recent report, published in the journal Neuropsychopharmacology, concludes that early childhood abuse affects males and females differently. That is to say, the physical changes that occur in the brain are distinct for men and women who were abused as children.
Studies like this one are done by examining the brains of adults who were abused as kids and comparing the activity or structure of different brain regions to the brains of adults who were not abused. The general technique of examining the structure or activity of the brain in a living human being is called neuroimaging and includes a range of techniques such as MRI, PET, fMRI, and others. (I’ve written about some of these techniques before. In fact, the development of better methods to image the brain is a huge area of research in the neuroscience field.)
This study did not examine behavioral differences in the subjects; as I said above, many other studies have looked at the psychological consequences of ELS. This paper is primarily interested in the gender differences in the brains of adults who were abused as kids.
*Note: the following discussion is entirely my own and is not mentioned or alluded to by the authors of this study.
This work—and the many studies that preceded it—has important implications because as a society, we have to realize that part of our personality/intelligence/character/etc. is determined by our genetics while the other part totally depends on the environment we are born into. I don’t want to extrapolate too much but the idea that childhood abuse can increase the risk of psychological problems as an adult also supports the broader notion that a great deal of a person’s success is determined by entirely random circumstances.
The science shows that a child born into a household rife with abuse will have more chance of suffering from a psychological problem (such as addiction) as an adult than someone who was born into a more stable life. The psychological problem could hurt that person’s ability to study in school or to hold down a job. And the tragic irony, of course, is that no child gets to choose the conditions under which they are born. A child, born completely without a choice of any kind over whether or not he or she will be abused, can still suffer the consequences of it (and blame for it) as an adult.
As a society, we often blame a person’s failures on his or her own personal failings, but what if a person’s childhood plays an important role in why that person might have failed? How, as a society, do we incorporate this information into the idea of ourselves as having complete control over our minds and our destinies, when we very clearly do not? As an adult, how much of a person’s personality is really “their own problem” when research like this clearly shows that ELS impacts a person well after the abuse has ended?
If the environment a child is born into has a tangible, physical effect on how the brain functions as an adult, then this problem is more than a social or an economic one: this is a matter of public health. Studies that support findings such as these provide empirical support for public policy and public services for child care, such as universal pre-K, increased availability of daycare, health insurance and medical access for children, and increased and equitable funding for all public schools regardless of the economic situation of the district a school happens to be located in.
One of our goals as a society (if indeed we believe ourselves to be a functioning society…the success of Donald Trump’s candidacy raises some serious doubts…but I digress) is the improvement of the lives of ALL of our citizens and securing the prosperity of the society for future generations. Reducing childhood poverty and abuse quite literally could help secure the future generations themselves and improve the ability of any child to grow up to become a successful and productive adult.
Public programs are essential because the unfortunate reality for many people born into poverty is that they must work all the time at low-paying jobs simply to survive and may not be able to give their children all the advantages of a wealthier family. And this is where government and public policy step in: to correct the imbalances and unfairness inherent to the randomness of life and level the playing field for all people. Of course, the specific programs and policies to reduce childhood poverty and abuse would themselves need to be evaluated empirically to confirm a real improvement in the development of the brain and the health of the child when he or she grows up.
And this is the real power of neuroscience and basic scientific research papers like this one. Research into how our brains operate in real-life situations reveals a side of our minds and our personalities that we may never have considered before, and the huge implications this can have for society. The brain is a complex machine, and just like other machines it can be broken.
Of course, we shouldn’t extrapolate too much and say, for example, that a drug addict who was abused as a child bears no responsibility for anything they’ve done since. But it is important to recognize all the mitigating factors at play in a person’s success and not simply dismiss someone’s problems as “their own personal responsibility.” As a neuroscientist, I might argue that that phrase and the issues behind it are far more nuanced than how certain politicians like to use it.
Special endnote: Due to some recent shifts in my career, Dr. Simon Says Science will be expanding the content that I write about. Addiction and neuroscience will still be prominently featured, but I plan to delve into a variety of other topics that I find interesting and share opinions that I think are important. I hope you will enjoy the changes! Thanks very much!
In a remarkable example of scientific collaboration, a new study by scientists at various research centers at the National Institutes of Health (NIH) has identified how ketamine works as a powerful and fast-acting anti-depressant. This discovery may lead to an effective and potent new treatment for depression.
Ketamine is normally used as an anesthetic, but at low doses it has been shown to have rapid-acting and long-lasting anti-depressant effects in humans. Fast relief of depression is incredibly important because most anti-depressant medications are not very effective or can take weeks (or even months in some cases) to reach maximal effect, which hurts the recovery of patients suffering from this crippling psychiatric disorder. However, despite its rapid action, ketamine has many side effects, such as euphoria (a “high” feeling) and dissociative effects (a type of hallucination involving a sense of detachment or separation from the environment and the self), and it is addictive.
If ketamine could be made safe to use without any of its other more dangerous properties, it would be a powerful anti-depressant medication.
With this goal in mind, scientists at the National Institute of Mental Health (NIMH), National Institute on Aging (NIA), National Center for Advancing Translational Sciences (NCATS), University of Maryland, and University of North Carolina-Chapel Hill sought to unravel the mystery of how ketamine works.
When ketamine enters the body it is broken down (metabolized) into many other chemical byproducts (metabolites). The team of scientists identified that it’s not ketamine itself but one of its metabolites, called HNK, that is responsible for ketamine’s anti-depressant action. Most importantly, HNK does not have any of the addictive or hallucinogenic properties of ketamine. What does this mean? This special metabolite can now be produced and given to patients while ketamine (and all its unwanted negative side effects) can be bypassed.
Of course, many tests still need to be done in humans to confirm the effectiveness of HNK, but the study is an amazing example of how an observation can be made in the clinic, brought into the lab for detailed analysis, and then brought back to the clinic as a potentially effective treatment.
But how did the scientists do it, and how do they know that HNK is what’s responsible for ketamine’s depression-fighting power? Keep reading below to find out.
Ketamine has traditionally been used as an anesthetic due to its pain-relieving and consciousness-altering properties. However, at doses too low to induce anesthesia, ketamine has been shown to relieve depression. Even more remarkably, the anti-depressant effects of ketamine occur within a few hours and can last for a week with only a single dose. Most anti-depressant medications can take weeks before they start relieving the symptoms of depression (this is due to how those medications work in the brain).
However, ketamine also has unwanted psychoactive properties, which limit its usefulness in the treatment of depression. Ketamine causes an intense high or sense of euphoria as well as hallucinogenic effects such as dissociation, a bizarre sense of separation of the mind from the self and environment. Ketamine is also addictive and is an abused party drug.
A debate has been going on about whether ketamine should be used for the treatment of depression and whether its risks outweigh its benefits. But what if ketamine itself is not responsible for the anti-depressant function, but rather a chemical byproduct of ketamine? This is what the scientists in this study reported: it’s HNK, and not ketamine, that is responsible for the powerful anti-depressant effects. This discovery was made in mice, but how do scientists even study depression in a mouse?
How do scientists study depression in rodents?
Depression is a complex psychological state that is difficult to study, but scientists have developed a number of tests to measure depressive-like behavior in rodents. While any one particular test is probably not good enough to measure depression, the combination of multiple tests—especially if similar results are found for each test—provides a reasonably accurate measurement of depression in rodents.
Some of the tests include:
Forced Swim Test
As the name reveals, in this test rodents are placed in a cylinder of water from which they cannot escape and are forced to swim. Mice and rats are very good swimmers, and when placed in the water they will swim around for a while, searching for a way to escape. However, after a certain amount of time, the mouse will “give up,” simply stop swimming, and just float there. This “giving up” is used as a proxy for depression, similar to how people who are depressed often lack the perseverance or motivation to keep trying. If you give a drug and the mice swim for much longer than without the drug, then you can make the argument that the drug had an anti-depressant effect. See this video of a Forced Swim Test.
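In practice, the “giving up” is quantified as time spent immobile, often scored automatically from video tracking. Here is a minimal sketch of that idea; the frame rate, movement threshold, and position traces are all hypothetical, not from any published protocol:

```python
# Minimal sketch of scoring a Forced Swim Test from tracking data.
# The frame rate, threshold, and traces below are illustrative assumptions.

def immobility_seconds(positions, fps=30, threshold=1.0):
    """Total time the animal moved less than `threshold` (arbitrary
    distance units) between consecutive video frames."""
    immobile_frames = 0
    for (x0, y0), (x1, y1) in zip(positions, positions[1:]):
        if ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5 < threshold:
            immobile_frames += 1
    return immobile_frames / fps

# Hypothetical 10-second traces: a swimming mouse vs. one that has "given up".
swimming = [(i * 2.0, 0.0) for i in range(300)]  # moves 2 units every frame
floating = [(0.0, 0.0)] * 300                    # barely moves at all

print(immobility_seconds(swimming))  # no immobility
print(immobility_seconds(floating))  # nearly the full 10 seconds
```

A drug with anti-depressant-like effects would show up in this kind of analysis as a lower immobility time relative to untreated animals.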
Learned Helplessness Test
One theory of depression is that it can result from being placed in a bad situation over which we have no control. This test models that type of scenario.
First, mice are placed in a chamber where they experience random foot shocks (learning about the bad, hopeless situation). Next, they are placed in a chamber that has two compartments. When a foot shock occurs, a door opens to a “safe” compartment, giving the mouse an opportunity to escape the bad situation. One measure of depression is that some mice won’t try to escape or will fail to escape. In essence, they’ve given up trying to escape the bad situation (learned helplessness). You can then take these “depressed” mice and run the experiment again, this time with the anti-depressant drug you want to test, and see how they do at escaping the foot shocks. Read more here.
Chronic Social Defeat Stress
Imagine you had a bully who beat you up every day, and that bully lived next door and stared at you through his bedroom window. It would probably make you feel pretty crummy, wouldn’t it? Well, in essence, that’s what the chronic social defeat stress test is all about.
A male mouse is placed in a cage with a much larger, older, and meaner male mouse that then attacks it. After the attack session, the “victim” mouse is housed in a cage where it can see and smell the bigger mouse. This induces a sense of hopelessness or depression in the “victim” mouse, and it will not try to interact with a “stranger” mouse if given a choice between the stranger and an empty cage (mice are pretty curious animals and will usually sniff around a cage with an unfamiliar mouse in it). This social avoidance is a measure of depression. In contrast, some mice will be resilient or resistant to this type of stress and will interact normally with the “stranger” mouse. Similar to above, you can test an anti-depressant drug in the “resilient” mice and the “depressed” mice.
There are a few others but these are three of the main ones used in this paper.
How did the NIH scientists figure out how Ketamine works to fight depression?
It was believed that ketamine’s anti-depressant function was due to its ability to inhibit the activity of the neurotransmitter glutamate. Specifically, ketamine inhibits a special target of glutamate called the NMDA receptor.
The first thing done in this paper was to study ketamine’s effects in rodent models of depression, and sure enough, it was effective at relieving depression-like behavior in the mice.
Ketamine comes in two different chemical varieties or enantiomers, R-ketamine and S-ketamine. Interestingly, the R-version was more effective than the S-version (this will be more important later).
Recall that ketamine is thought to work because it inhibits the NMDA receptor. However, the scientists found that another drug, MK-801, which also directly inhibits the NMDA receptor, did not have the same anti-depressant effects. So what is it about ketamine that makes it a useful anti-depressant, if not its ability to inhibit the NMDA receptor?
Ketamine is broken down into multiple different other chemical byproducts or metabolites once it enters the body. The scientists were able to isolate and measure these different metabolites from the brains of mice. For some reason one of the metabolites, (2S,6S;2R,6R)-hydroxynorketamine (HNK) was found to be three times higher in females compared to males. Ketamine was also more effective at relieving depression in female mice compared to male mice and the scientists wondered: could it be because of the difference in the levels of the ketamine metabolite HNK?
To test this, a chemically modified version of ketamine was produced that can’t be metabolized. Amazingly, the ketamine that couldn’t be broken down did not have any anti-depressant effects. This finding strongly suggests that it really is one of the metabolites, and not ketamine itself, that’s responsible for the anti-depressant activity. The most likely candidate? The HNK compound that showed the unusual elevation in females vs. males.
Similar to ketamine, HNK comes in two varieties, (2S,6S)-HNK and (2R,6R)-HNK. The scientists knew that the R-version of ketamine was more potent than the S-version so they wondered if the same was true for HNK. Sure enough, (2R,6R)-HNK was able to relieve depression in mice while the S-version did not. The scientists appeared to have identified the “magic ingredient” of ketamine’s depression-relieving power.
These experiments required a great deal of sophisticated and complex analytical chemistry. However, this is beyond my area of expertise, so unfortunately I cannot discuss it further.
So now the team had what they thought was the “magic ingredient” from ketamine for fighting depression. But could they support their behavioral work with more detailed molecular analyses?
The next step was to look at the actual properties of neurons themselves and see if (2R,6R)-HNK changed their function in the short and long term. Using a series of sophisticated electrophysiology experiments in which the activity of individual neurons can be measured, the scientists found that glutamate signaling was indeed disrupted. However, it appeared that a different type of glutamate receptor was involved: the AMPA receptor, and not NMDA receptor. The scientists confirmed this with protein analysis; components of the AMPA receptor increased in concentration in the brain over time. These data suggest that it is alterations in glutamate-AMPA signaling that underlies the long-term effectiveness of HNK.
OK, so great! HNK reduces depression but does it still have all the other nasty side effects of ketamine? If it does, then it’s no better than ketamine itself.
For the final set of experiments, the scientists looked at the psychoactive and addictive properties of ketamine. Using a wide range of behavioral tests that I won’t go into the details of, they found that (2R,6R)-HNK had a much lower side-effect profile than ketamine.
Finally, ketamine is an addictive substance that can be and is abused illegally. A standard test of addiction in mouse models is self-administration (I’ve discussed this technique previously). Mice readily self-administer ketamine, which indicates they want to take more and more of it, just like a human addict. However, rodents do not self-administer HNK! This suggests that HNK is not addictive like ketamine.
In conclusion, (2R,6R)-HNK appears to be extremely effective at relieving depression in mice, has fewer side effects than ketamine, and is not addictive. Sounds pretty good to me!
Next step: does HNK work in humans? To be continued….
Peltoniemi MA, et al. Ketamine: A Review of Clinical Pharmacokinetics and Pharmacodynamics in Anesthesia and Pain Therapy. Clinical pharmacokinetics. 2016.
Newport DJ, et al. Ketamine and Other NMDA Antagonists: Early Clinical Trials and Possible Mechanisms in Depression. The American journal of psychiatry. 2015;172(10):950-66.
Morgan CJ, et al. Ketamine use: a review. Addiction. 2012;107(1):27-38.
Sanacora G, Schatzberg AF. Ketamine: promising path or false prophecy in the development of novel therapeutics for mood disorders? Neuropsychopharmacology. 2015;40(5):1307.
Hollis F, Kabbaj M. Social defeat as an animal model for depression. ILAR Journal. 2014;55(2):221-32.
Abdallah CG, et al. Ketamine’s Mechanism of Action: A Path to Rapid-Acting Antidepressants. Depression and anxiety. 2016.
Of course, we all know the prevalence and extent of underage drinking, and the damage alcohol does to the developing brain has been heavily researched, not to mention all the significant secondary problems associated with alcohol abuse (car crashes, sexual assault on college campuses, falling off of balconies…).
But here are some numbers anyway: as of 2013, 8.7 million youths aged 12-20 reported past-month alcohol use, a shockingly high number for an age group that is not legally allowed to drink alcohol.
Similarly, marijuana, which is still illegal in the vast majority of the US, is nearly as ubiquitous. According to the NSDUH 2013 survey, 19.8 million adults aged 18 or older reported past month marijuana use.
But what if the risk of alcohol and marijuana use by youths could be reduced? What if a teacher could be given the tools not only to identify certain risky personality traits in their students but also to use that knowledge to keep those at-risk students from trying drugs such as alcohol and marijuana? A series of studies from the laboratory of Dr. Patricia A. Conrod of King’s College London reports having done exactly that.
I had the pleasure of seeing Dr. Conrod speak at the recent Society for Neuroscience conference as part of a satellite meeting jointly organized by the National Institute on Drug Abuse (NIDA) and the National Institute on Alcohol Abuse and Alcoholism (NIAAA). Dr. Conrod presented a compelling story spanning over a decade of her and her colleagues’ work, in which certain personality traits among high-risk youths can be used to predict drug abuse among those kids. Dr. Conrod argues that by identifying different risk factors in different adolescents, a specific behavioral intervention can be designed to help reduce alcohol drinking and marijuana use in these youths. And who is best positioned to administer such an intervention? Teachers and counselors, of course: educators who spend a great deal of time interacting with students and are in the best position to help them.
This ambitious study recruited 2,643 students (between 13 and 14 years old) from 21 secondary schools in London (20 of the 21 schools were state-funded). Importantly, this study was a cluster-randomized control trial, which means the schools were randomly assigned to two groups: one group received the intervention while the other did not. The researchers identified four personality traits in high-risk (HR) youths that increase the risk of engaging in substance abuse: sensation seeking, impulsivity, anxiety sensitivity, and hopelessness.
A specific intervention based on cognitive behavioral therapy (CBT) and motivational enhancement therapy (MET) was developed to target each of these personality traits. Teachers, mentors, counselors, and educational specialists recruited for the study at each school were trained in the specific interventions. In general, CBT is an approach used in psychotherapy to change negative or harmful thoughts, or the patient’s relationship to those thoughts, which in turn can change the patient’s behavior. CBT has been effective in treating a number of mental disorders such as anxiety, personality disorders, and depression. MET is an approach used to augment a patient’s motivation to achieve a goal and has mostly been employed in treating alcohol abuse.
The CBT and MET interventions in this study were each designed to target one of the four personality traits (for example, anxiety reduction) and were administered in two 90-minute group sessions. The specific lesson plans for these interventions were not reported in the studies, but they included workbooks and activities such as goal-setting exercises and CBT exercises to help students dissect their own personal experiences by identifying and dealing with negative or harmful thoughts and recognizing how those thoughts can result in negative behaviors. Interestingly, alcohol and drug use were only a minor focus of the interventions.
The success of the interventions was determined through self-reporting. The students completed the Reckless Behavior Questionnaire (RBQ), which uses a six-point scale (“never” to “daily or almost daily”) to report substance use. Due to the sensitive nature of these questionnaires and the need for honesty from the students, measures were taken to ensure accuracy in the self-reporting, such as a strong emphasis on the anonymity and confidentiality of the reports and the inclusion of several “sham” items designed to gauge accuracy of reporting over time. Surveys were completed every 6 months for 24 months (two years), which is a sufficient time frame to assess the effect of the interventions.
Most importantly, schools were blinded to which group they were placed in, and teachers and students not involved in the study were not aware of the trial occurring at their school. The students involved were unaware of the real purpose and scope of the study. These factors are important because they help eliminate secondary effects and support the direct efficacy of the interventions themselves.
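The key feature of the cluster-randomized design is that whole schools, not individual students, are assigned to the trial arms, so every student simply inherits their school’s assignment. A minimal sketch of that assignment step (school names and the random seed are made up):

```python
# Sketch of cluster randomization: schools, not students, are the units
# randomized to intervention vs. control. Names and seed are illustrative.
import random

def cluster_randomize(schools, seed=0):
    rng = random.Random(seed)
    shuffled = schools[:]
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return {"intervention": shuffled[:half], "control": shuffled[half:]}

schools = [f"school_{i:02d}" for i in range(21)]  # 21 schools, as in the study
groups = cluster_randomize(schools)

# Every school lands in exactly one arm, and every student in a given
# school receives (or does not receive) the intervention together.
print(len(groups["intervention"]), len(groups["control"]))
```

Randomizing at the school level is what lets the trial deliver the intervention through each school’s own teachers and counselors while still keeping a comparable control group.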
The results were impressive: the high-risk students who received the intervention drank less often and less heavily than the control students who did not. While HR students were overall more likely to report drinking than low-risk (LR) students, the HR students showed a significant effect of the personality-targeted interventions on drinking behavior.
A study of this size is incredibly complex, and the statistics involved are equally complex. The authors analyzed the data in a number of ways and published the results in several papers. A recent study modeled the data over time (the 24 months in which the surveys were collected) and used these models to predict the odds that students would engage in risky drinking behavior. The authors reported a 29% reduction in the odds of drinking frequency among HR students receiving the interventions and a 43% reduction in the odds of binge drinking, compared to HR students not receiving the interventions.
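It helps to unpack what a “29% reduction in odds” actually means, since odds are not the same thing as probability. The sketch below uses made-up counts chosen purely for illustration (the paper reports model-derived odds ratios, not these raw numbers):

```python
# What a "29% reduction in odds" means, using made-up illustrative counts.
# Odds = events / non-events, which is different from probability.

def odds(drinkers, total):
    """Odds of drinking: number who drink divided by number who do not."""
    return drinkers / (total - drinkers)

control_odds = odds(50, 150)   # hypothetical: 50 of 150 HR controls drink
treated_odds = odds(39, 149)   # hypothetical intervention arm

odds_ratio = treated_odds / control_odds
reduction = 1 - odds_ratio     # fraction by which the odds are lower

print(f"odds ratio {odds_ratio:.2f}, {reduction:.0%} reduction in odds")
```

With these invented counts the odds ratio works out to about 0.71, i.e., roughly a 29% reduction in the odds of drinking for the intervention group, which is the kind of figure the authors report.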
Interestingly, the authors report a mild herd effect in the LR students: they believe the intervention slowed the onset of drinking in the LR students, possibly due to interactions between the HR students receiving the interventions and the LR students. However, additional studies will need to be done to confirm this result.
Recall that the Reckless Behavior Questionnaire (RBQ) was utilized in this study to quantify drug-taking behavior. While the study was specifically designed to measure effects on alcohol, the RBQ also included questions about marijuana. So the authors reanalyzed their data and specifically looked at effects of the interventions on marijuana use.
They found that sensation-seeking HR students who received an intervention had a 75% reduction in marijuana use compared to sensation-seeking HR students who did not. However, unlike the findings for alcohol use, the study was not able to detect any effect on marijuana use for HR students in general. Nevertheless, the data suggest that the teacher/counselor-administered interventions are effective at reducing marijuana use as well.
While you may be unconvinced by the modest reduction in drinking and marijuana frequency reported in these studies and may be skeptical of the long-term effect on drug use in these kids, keep in mind that the teachers and counselors that administered these interventions received only 2 or 3 days of training and the interventions themselves were very brief, only two 90-minute sessions. What I find remarkable is that such a brief, targeted program can have ANY effects at all. And most importantly, the effects well outlasted the course of the interventions for the full two-years of the follow-up interviews.
These targeted interventions have three main advantages:
Administered in a real-world setting by teachers and counselors
Brief (only two 90-minute group sessions)
Cheap (the cost of training and materials for the group sessions)
The scope of this intervention needs to be tested on a much larger cohort of students in a larger variety of neighborhoods, but it is extremely promising nonetheless. Also, it would be interesting to break down these data by race, socioeconomic status, and gender, all of which may impact the effectiveness of the treatments and were not considered in this analysis. Finally, how would you implement these interventions on a wide scale? I eagerly look forward to additional work on these topics.
Thanks for reading 🙂
See these other articles in Time and on King’s College for less detailed discussions of these studies.
Also see these related studies from Conrod’s group:
The third and final part of my three part guest blog series on Optogenetics has been published on the Addgene blog. Addgene is a nonprofit organization dedicated to making it easier for scientists to share plasmids and I’m thrilled to be able to contribute to their blog! This post covers running behavioral experiments utilizing optogenetics.
The second part of my three part guest blog series on Optogenetics has been published on the Addgene blog. Addgene is a nonprofit organization dedicated to making it easier for scientists to share plasmids and I’m thrilled to be able to contribute to their blog! This post covers the material science aspects of running optogenetic experiments.
The first part of my three part guest blog series on Optogenetics has been published on the Addgene blog. Addgene is a nonprofit organization dedicated to making it easier for scientists to share plasmids and I’m thrilled to be able to contribute to their blog!
The biological sciences are in a golden era: the number of advanced technological tools available coupled with innovations in experimental design has led to an unprecedented and accelerating surge in knowledge (at least as far as the number of papers published is concerned). For the first time in history, we are beginning to ask questions in biology that were previously unanswerable.
No field demonstrates this better than genetics, the study of DNA and our genes. With the advent of high-throughput DNA sequencing, genetic information can be acquired from literally thousands of individuals and, even more remarkably, can be analyzed in a meaningful way. Genomics, the study of the complete set of an organism’s DNA (its genome), directly applies these advances to probe questions that are literally thousands of years old.
A recent study, a collaborative effort from scientists in Iceland, the Netherlands, Sweden, the UK, and the US, is an example of the power of genomics to answer these elusive questions.
The scientists posed an intriguing question: if you are at risk for a psychiatric disorder, are you more likely to be creative? Is there a link between madness and creativity?
Aristotle himself once said, “no great genius was without a mixture of insanity,” and indeed, the “mad genius” archetype has long pervaded our collective consciousness. But Vincent Van Gogh cutting off his own ear and Beethoven’s erratic fits of rage are compelling stories, not empirical, scientific evidence.
Numerous studies have suggested a correlation between psychiatric disorders and creativity, but never before has an analysis of this magnitude been performed.
Genome-wide association studies (GWAS) take advantage of not only the plethora of human DNA sequencing data but also the computational power to compare it all. Quite literally, the DNA of thousands of individuals is lined up and, using advanced computer algorithms, compared. This comparison helps to reveal whether specific changes in DNA, or genetic variants, are more common in individuals with a certain trait. The analysis is especially useful for identifying genetic variants that may be responsible for highly complex diseases caused not by a single gene or single genetic variant but by many different variants; such diseases are called polygenic. Psychiatric diseases are polygenic, so GWAS is useful in revealing important genetic information about them.
This video features Francis Collins, the former head of the Human Genome Project and current director of the National Institutes of Health (NIH), explaining GWAS studies. The video is 5 years old but the concept is still the same (there aren’t many GWAS videos meant for a lay audience).
The authors used data from two huge analyses that previously performed GWAS on individuals with either bipolar disorder or schizophrenia compared to normal controls. Using these prior studies, the authors generated a polygenic risk score for bipolar disorder and one for schizophrenia. This means that, based on these enormous data sets, they were able to identify genetic variants that predict whether a normal individual is more likely to develop bipolar disorder or schizophrenia. The authors then tested their polygenic risk scores on 86,292 individuals from the general population of Iceland and, success! The polygenic risk scores did associate with the occurrence of bipolar disorder or schizophrenia.
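Conceptually, a polygenic risk score is just a weighted sum: for each risk variant, multiply how many copies of the risk allele a person carries by that variant’s effect size from the GWAS, then add everything up. Here is a minimal sketch of that idea; the variant IDs, effect sizes, and genotype below are invented for illustration (real scores sum over thousands of variants).

```python
# Conceptual sketch of a polygenic risk score (PRS).
# All values below are hypothetical, not from the study.

# Effect size (e.g., log odds ratio) for each risk variant, from a prior GWAS
effect_sizes = {"rs0001": 0.10, "rs0002": 0.05, "rs0003": 0.20}

# How many copies (0, 1, or 2) of each risk allele one individual carries
genotype = {"rs0001": 2, "rs0002": 0, "rs0003": 1}

# The PRS is simply the weighted sum of risk-allele counts
prs = sum(effect_sizes[v] * genotype[v] for v in effect_sizes)
print(f"polygenic risk score: {prs:.2f}")  # higher = greater estimated risk
```

A score like this says nothing on its own; it becomes meaningful only when compared across a large population, as the authors did with the Icelandic cohort.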
Next, the scientists tested for an association between the polygenic risk scores and creativity. Of course, creativity is a difficult thing to define scientifically. The authors explain, “a creative person is most often considered one who take novel approaches requiring cognitive processes that are different from prevailing modes of thought.” Translation: they define a creative person as someone who often thinks outside the box.
In order to measure creativity, the authors defined creative individuals as “belonging to the national artistic societies of actors, dancers, musicians, and visual artists, and writers.”
The scientists found that the polygenic risk scores for bipolar disorder and schizophrenia each separately associated with creativity, while five other types of professions were not associated with the risk scores. In other words, an individual at risk for bipolar disorder or schizophrenia is more likely to be in a creative profession than a non-creative one.
The authors then performed a number of additional analyses to see if this effect was due to other factors, such as number of years in school or having a university degree, but these did not alter the associations with being in a creative field.
Finally, the same type of analysis was done with two other data sets: 18,452 individuals from the Netherlands and 8,893 individuals from Sweden. Creativity was assessed slightly differently: creative profession was used once again, but for a subset of the individuals, data were also available from a Creative Achievement Questionnaire (CAQ), which reported achievements in the creative fields described above.
Once again, the polygenic risk scores associated with being in a creative profession to a similar degree as the Icelandic data set; a similar association was found with the CAQ score.
The authors conclude that the risk for a psychiatric disorder is associated with creativity, which provides concrete scientific evidence for Aristotle’s observation all those years ago.
However, future analyses will have to broaden the definition of creativity beyond just narrowly defined “creative” professions. For example, the design of scientific experiments involves a great deal of creativity but is not considered a creative profession and is therefore not included in these analyses; a similar argument could be made for other professions. Also, the study did not discuss which genetic variants are involved or what their functions are.
Nevertheless, these exciting data are an example of the power that huge genomic data sets can have in answering fascinating questions about the genetic basis of human behavior and complex traits.
For further discussion, read the News and Views article, a scientific discussion of the paper, which talks about potential evolutionary mechanisms to explain these associations.
This is a fascinating question in neuroscience and at the very core of what makes us human. After all, our entire concept of ourselves is defined by our memories and without them, are we even ourselves? This is a pretty lofty philosophical discussion… but today we’re only interested in the neuroscience of memory.
Specifically, what happens to individual neurons in the human brain when a new memory is created and recalled?
Researchers at the University of California-Los Angeles performed a study in humans that has shed some light on this important question. Published recently in the journal Neuron, the novelty of the study involved recording how many times a neuron would fire during a specially designed memory test. In other words, the scientists were able to monitor what happened to individual neurons in a human being as a new memory was being created!
This article is open access (it can be downloaded and distributed for free). The article can be found here, or you can download the PDF.
Before I go into what the researchers found, let’s see how it was done.
The subjects in the study were patients being treated for epilepsy. As part of their clinical diagnosis, they had been implanted with an electrode, a tool used to measure neuronal activity; in other words, the electrode measures how often a neuron fires. The fact that these patients already had an electrode inserted into the brain for clinical reasons made it convenient for the researchers to conduct this study.
The brain region in which the electrode was implanted is called the medial temporal lobe (MTL). The image to the right shows the left human temporal lobe; the medial region of the temporal lobe is located more towards the center of the brain.
One specific region of the MTL, the hippocampus, is believed to be the primary brain region where memories are “stored”. Specifically, previous studies in animals and humans have suggested that the MTL and hippocampus are very important to encoding episodic memory. Episodic memory involves memories about specific events or places. In this study, the example of episodic memory being used is remembering seeing a person at a particular place. Another example: the game Simon™ can be considered a test of your brain’s ability to rapidly create and recall short-term episodic memories!
*Note: Episodic memory is considered one of the two main branches of declarative memory, or memories that can be consciously recalled. The other type of declarative memory is semantic memory: memories of non-tangible things, like facts.
To test the episodic memory of remembering a person at a particular place, images were presented to the patients while the neurons were being recorded. There were 5 different tasks (all completed within 25-30 minutes). See Figure 1 below from the paper.
First, a pre-screening was done in which the patients were shown many random images of people and places. The activity of multiple neurons was recorded, the data were quickly analyzed, and 3-8 pairs of images were compiled. In each pair, one image was the “preferred” or “P” image, meaning the neurons being recorded fired when the “P” image was shown. The second was the “non-preferred” or “NP” image, meaning the neurons did not respond when it was shown.
The first task was the “screening” test. Each “P” and “NP” image was shown individually, and the neuron’s response to each was recorded. As you would expect, the neuron fired heavily to the “P” image and not very much to the “NP” image.
The second task was the “learning task” in which a composite image of each of the “P” and “NP” image pairs was made. The person in the “P” image was digitally extracted and placed in front of the landmark in the “NP” image. After the composite images were shown, the individual images were shown again.
For example, in one image pair for one patient, the “P” image was a member of the patient’s family while the “NP” image was the Eiffel Tower (for this example, see Figure 2). The composite image in the “learning” task was the family member in front of the Eiffel Tower. Another example of a “P” image was Clint Eastwood and the “NP” image was the Hollywood sign. The composite image would therefore be Clint Eastwood in front of the Hollywood sign. (However, in some image pairs the “P” image was a place and “NP” image was a person).
The third task was “assessing learning”. The image of just the person in the composite image was shown and the patient had to pick out the correct landmark he/she was paired with. For example, the picture of the family member was shown and the patient would have to pick out the Eiffel Tower image.
The fourth task was the “recall” task. The landmark image was shown and the patient had to remember and say the person it was paired with. For example, the Eiffel Tower was shown and the patient had to say the family member’s name.
Finally, the fifth task was a “re-screening” in which each individual image was shown again so the neuron’s activity could be compared to Task 1, pre-learning.
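To keep the five tasks straight, here is a schematic of the session as a simple data structure. The structure comes from the paper’s description above, but the code itself (task names, wording) is just an illustrative summary, not the authors’ software.

```python
# Schematic of the five-task session described above.
# Descriptions paraphrase the paper; the code is illustrative only.

tasks = [
    ("screening",       "show each P and NP image alone, record responses"),
    ("learning",        "show composite images: P person placed in front of NP landmark"),
    ("assess_learning", "show the person, patient picks the paired landmark"),
    ("recall",          "show the landmark, patient names the paired person"),
    ("re_screening",    "show each image alone again, compare to Task 1"),
]

for number, (name, description) in enumerate(tasks, start=1):
    print(f"Task {number}: {name} -- {description}")
```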
The activity of multiple neurons was recorded for each image during each of the tasks. The data were then analyzed in a number of different ways, and the activity of different neurons was reported.
And what was found?
Let’s go back to the family member/Eiffel Tower example. The researchers were able to show that a neuron in the hippocampus responded heavily to the picture of the family member (“P” image) but not to the Eiffel Tower (“NP” image). After showing the composite image, the neuron now responded to the Eiffel Tower in addition to the family member! (The neuron also fired a comparable amount to the individual family member image and the composite image.)
As you can see in Figure 2, each little red or blue line indicates when a neuron fired. For example, in Task 1 you can clearly see more firing (more lines) to the “P” image than the “NP” image. You can see that after Task 2, the neuron responds to either the “P” or “NP” image (especially obvious in Task 5). The middle graph indicates the firing rate of the neuron to the “NP” image, and it clearly shows an increased firing rate after learning (AL) compared to before learning (BL). It may look small, but the scientists calculated a 230% increase in the neuron’s firing rate to the “NP” image after the learning/memory task took place!
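A 230% increase just means the rate after learning is 3.3 times the rate before. Here is the back-of-the-envelope arithmetic; the spike counts and durations below are invented for illustration and are not taken from the paper.

```python
# Percent increase in firing rate -- spike counts are hypothetical.

def firing_rate(spike_count, duration_s):
    """Mean firing rate in spikes per second (Hz)."""
    return spike_count / duration_s

# Suppose the neuron fired 10 spikes over 5 s of "NP" trials before
# learning, and 33 spikes over 5 s of "NP" trials after learning.
rate_before = firing_rate(10, 5.0)   # 2.0 Hz
rate_after = firing_rate(33, 5.0)    # 6.6 Hz

pct_increase = (rate_after - rate_before) / rate_before * 100
print(f"increase in firing rate: {pct_increase:.0f}%")  # 230%
```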
What does this mean? It means that a new episodic memory has been created and a single neuron is now firing in a new pattern in order to help encode the new memory!
This was confirmed the other way around, too. In another patient, the “P” image was the White House and the “NP” image was beach volleyball player Kerri Walsh. The neuron being recorded fired a lot when the image of the White House was shown, but not so much for the Kerri Walsh image. Then the composite image was shown and the learning/recall tasks were performed. The neuron was shown to fire to both the White House image AND the Kerri Walsh image! The neuron was responding to the new association memory that had been created!
Keep in mind these are just two examples. The scientists actually recorded from ~600 neurons in several different brain regions besides the hippocampus, but they only used the roughly 50 that responded to visual presentation of a “P” image, either a person or a landmark (the identification of visually responsive neurons was a crucial part of the experiment). Remarkably, when the firing rates of all these neurons were averaged before and after the memory/learning tasks, a similar finding emerged: the neurons now responded to the “NP” image after the composite was shown!
Many other statistical analyses of the data were done to prove this was not just a fluke of one or two neurons but a consistent observation among all the neurons studied, but I won’t go into those details now.
But what’s going on here? Are the neurons that respond to the “P” stimulus now directly responding to the “NP” image, or is it more indirect: does some other neuron respond to the “NP” image and in turn signal the “P” neuron to increase its firing? The authors performed some interesting analyses suggesting that both of these mechanisms may apply, but for different neurons.
Finally, were all the recorded neurons that were engaged in encoding the new episodic memory located in the hippocampus? The answer is no. Responsive neurons were identified in several brain regions besides the hippocampus, including the entorhinal cortex and the amygdala. But most of the responsive cells were located within the parahippocampal cortex, a region of the cortex that surrounds the hippocampus, so it is not surprising that it is heavily involved in encoding a new memory.
In conclusion, the scientists were able to observe, for the first time, the creation of a new memory in the human brain at the level of a single neuron. This is an important development because such a detailed analysis has never before been done in humans and, most importantly, in real time. That is, the experiment was able to observe the actual inception of a new memory at the neuronal level.
However, one major limitation is that the activity of these neurons was not studied over the long term, so it’s unknown whether the rapid change in activity is a short-term response to the association of the two images or really represents a long-term memory. The authors acknowledge this limitation, but the problem really lies in the difficulty of doing such studies in humans. It’s not exactly ethical to leave an electrode in someone’s brain just so you can test them every week!
But what does all of this mean? The authors suggest that the work may help resolve a debate that has been going on in the psychology field since the 1940s: do associations form gradually or rapidly? These results strongly suggest that neurons rapidly change their responses to encode a new memory.
But how will these results shape the neuroscience of memory? The answer is that I don’t know, and no one does. Such is the rich tapestry of neuroscience: another thread woven by the continuing work of scientists all over the world to understand what it is that makes us human: our brains.