
Archive for the ‘Volume 2 (Spring 2009)’ Category

Reading the Mind: Magic or Medicine?

by: Tina Munjal, ’12

(You may also see the full spread of this article in a PDF.)

In J.K. Rowling’s hit series Harry Potter, lead characters Albus Dumbledore and Lord Voldemort practice Legilimency, the ability to “mind read” the emotions and thoughts of other people. Such a concept no longer needs to be dismissed as merely a crafty plot device in a children’s fairy tale, for radiological science has brought a sort of real-life Legilimency into the realm of possibility through functional magnetic resonance imaging, or fMRI. fMRI operates on the principle that active areas of the brain require increased blood flow because of the greater oxygen requirements of the neurons in those regions. An fMRI machine exploits the difference in the magnetic properties of oxyhemoglobin and deoxyhemoglobin to determine which regions of the brain are active during particular cognitive tasks [2]. Although fMRI may not be entirely magical, the implications of such a technology are nevertheless vast and varied. Indeed, fMRI is creating a stir among scientists, ethicists, lawyers, educators, and physicians.
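The logic behind flagging “active” regions can be caricatured in a few lines of code. The sketch below is purely illustrative (simulated signals, not a real fMRI pipeline, and the region names are invented): a voxel whose simulated BOLD signal rises and falls with the on/off timing of a cognitive task is flagged as active, while one carrying only noise is not.

```python
import random

random.seed(0)

def pearson(x, y):
    """Plain Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

# Task design: the subject alternates 10 scans of rest with 10 scans of task.
task = ([0] * 10 + [1] * 10) * 3

def simulate_voxel(active):
    # Active voxels gain signal during task blocks (more oxyhemoglobin,
    # less paramagnetic deoxyhemoglobin); every voxel also carries noise.
    return [(2.0 if active else 0.0) * t + random.gauss(0, 1) for t in task]

voxels = {"task_region": simulate_voxel(active=True),
          "unrelated_region": simulate_voxel(active=False)}

for name, series in voxels.items():
    r = pearson(series, task)
    print(f"{name}: r = {r:+.2f} -> {'ACTIVE' if r > 0.5 else 'quiet'}")
```

Real analyses also model the sluggish hemodynamic response and correct for multiple comparisons across tens of thousands of voxels, but the underlying idea, correlating each voxel’s signal with the task timing, is the same.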

For example, fMRI has the potential to be used as a legal arbiter, incriminating the guilty and vindicating the innocent, by detecting patterns of activity in the brain associated with dishonesty. While tests like the polygraph can be manipulated by the subject through altered physiological responses, fMRI read-out is more resistant to conscious control [1]. Two American companies, Cephos and NoLie MRI, have already released their prototypes of fMRI-based lie-detection systems. The founder of Cephos, Steven Laken, claims that his company’s system works accurately 95% of the time. Laken predicts that information acquired through such neuroimaging technologies may be acceptable for use as evidence in court for civil cases as early as next year [5].

Scholars have their reservations, however. First, fMRI is not foolproof. The changes in hemodynamic response that fMRI measures can be affected by a range of factors, including age, fitness, and medications [3]. Furthermore, the majority of deception-pattern studies have been conducted on subjects, frequently college students, who are asked to commit inconsequential dishonesty during their scans. The results of these studies cannot necessarily be applied to real-life situations in which the nature of the lies is starkly different and the potential consequences much more grave [3]. Moreover, the barrier between inner thought and outer communication, the sense of “mental privacy” [1], could quickly be degraded by such potentially intrusive applications of fMRI. Another significant impediment to the advancement of so-called neurolaw is the Fifth Amendment to the U.S. Constitution, which proscribes coerced self-incrimination [5]. Thus, although fMRI could one day be used to exonerate innocent individuals, this benefit might come only at the expense of the rights of the accused.

The controversy is not limited to the courtroom, however, as fMRI could also threaten the classroom dynamic [4]. If fMRI really does reveal the inner workings of the brain and mind, what could possibly serve as a better assessor of academic potential? The SAT would undoubtedly be left in the dust. However, current imaging technology does not yet account for the fact that young minds are still developing and immensely plastic; the margin of error for fMRI is therefore larger in children [4]. In other words, students ought not to throw away their preparatory books just yet. Tests like the SAT are here to stay for quite some time.

On a related note, it has been suggested that fMRI could one day be used on infants to forecast the likelihood of later brain dysfunction. Once again, however, we find that the developing brain can be very unpredictable [4]. Prematurely labeling a child as mentally deficient could cause the child to be needlessly stigmatized, as well as deprived of a full education and even of insurance coverage [6]. Perhaps it is better, then, that fMRI is not yet capable of completely accurate prognoses, since society is not yet prepared to deal with the implications.

True, fMRI is not immediately going to change the way we govern, learn, and diagnose, but the potential is there. While we wait for technology to catch up with the ideas of our times, it is imperative that a solid ethical and logistical framework is constructed so that we can be prepared to put to our benefit, rather than to our detriment, the “magic” of functional neuroimaging once it arrives in full force.

Tina Munjal is a freshman double-majoring in Biochemistry & Cell Biology and Cognitive Sciences at Wiess College.

References

1. Bles, M., Haynes, J.D. Detecting concealed information using brain-imaging technology. Neurocase. 2008, 14, 82-92
2. BOLD functional MRI. http://lcni.uoregon.edu/~ray/powerpoints/lecture_10_24.ppt (accessed January 16, 2009)
3. Deceiving the law. Nat. Neurosci. 2008, 24, 1739-1741
4. Downie, J., Schmidt, M., Kenny, N., D’Arcy, R., Hadskis, M., Marshall, J. Paediatric MRI Research Ethics: The Priority Issues. J. Bioethical Inquiry. 2007, 4, 85-91
5. Harmanci, R. Complex brain imaging is making waves in court. San Francisco Chronicle [Online]. October 17, 2008. http://www.sfgate.com/cgi-bin/article.cgi?f=/c/a/2008/10/17/MN8M13AC0N.DTL (accessed January 16, 2009)
6. Illes, J. Neuroethics in a New Era of Neuroimaging. Am. J. Neuroradiol. 2003, 24, 1739-1741.

The Effect of the Perinatal Condition on Adolescent Anti-Social Behavior

by: Manivel Rengasamy, ’10


Adolescence is a tumultuous time developmentally, characterized by dramatic physiological and psychological changes. It is also a common period for the onset of mental disorders, ranging from mood disorders such as depression and bipolar disorder to eating disorders such as anorexia nervosa [2]. A series of developmental periods may influence or indicate vulnerability to mental disorders in adolescence, and one of the first such stages is the perinatal phase, the period from five months before to one month after birth. Adverse perinatal events, such as an unstable maternal emotional state, poor maternal physical health, substance abuse, or smoking, are likely to lead to mental disorders later in life [1].

Although smaller studies have examined such events, conclusive long-term research on mildly adverse perinatal events in relation to anti-social behavior had not been conducted. Recently, Nomura et al. investigated the effects of perinatal events on anti-social behavior during adolescence [5]. Over a period of thirty years, 1,500 randomly selected individuals born to mothers enrolled at the Johns Hopkins Collaborative Perinatal Study site were examined. Perinatal markers, such as birthweight, head circumference, and 5-minute Apgar scores (scores reflecting an infant’s general health), were recorded at birth; lower birthweight, smaller head circumference, and lower Apgar scores indicate suboptimal perinatal conditions. Children with varying perinatal markers were evaluated throughout childhood for neurological problems, communication problems, academic and cognitive abilities, and adolescent anti-social behavior. After statistical analysis using structural equation modeling (SEM), indirect associations were found between perinatal birth conditions and most of the evaluated developmental difficulties in both males and females, most notably anti-social behavior during adolescence. The pathway from perinatal condition to adverse behavioral outcome was confirmed: poor perinatal status correlated with a developmental problem, and each developmental problem, in turn, correlated positively with the next. For example, poor perinatal status correlated with neurological problems, which then correlated with communication problems.
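The chained correlations at the heart of this pathway can be illustrated with a toy simulation. The coefficients below are invented, not the study’s estimates; the point is only that when each developmental stage inherits part of the previous one, perinatal status ends up correlated with adolescent anti-social behavior indirectly, through the intermediate problems, which is precisely the kind of pathway SEM is built to test.

```python
import random

random.seed(1)
N = 1500  # roughly the size of the cohort described above

def pearson(x, y):
    """Plain Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

# Each stage is built partly from the previous one plus its own noise,
# mimicking: perinatal -> neurological -> communication -> anti-social.
perinatal     = [random.gauss(0, 1) for _ in range(N)]
neurological  = [0.6 * p + random.gauss(0, 1) for p in perinatal]
communication = [0.6 * x + random.gauss(0, 1) for x in neurological]
antisocial    = [0.6 * x + random.gauss(0, 1) for x in communication]

# Adjacent stages correlate strongly; the perinatal-to-antisocial link
# is weaker but still present, i.e. an indirect association.
print("perinatal ~ neurological:", round(pearson(perinatal, neurological), 2))
print("perinatal ~ antisocial:  ", round(pearson(perinatal, antisocial), 2))
```

The direct correlation shrinks at each remove down the chain, which is why a model of the whole pathway, rather than a single pairwise correlation, is needed to detect the effect.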

This research is valuable because it studies relatively normal-term infants rather than early or late-term infants. Although early and late-term infants present many mental problems, normal-term infants should also be a target population for studies of future problems [3]. The findings thus give perinatal research direct practical value. By providing a way to correlate moderately poor perinatal status with future anti-social behavior, the research offers a better understanding of how to prevent such behavior and its associated problems. Early in life, at-risk individuals can be identified from their perinatal status and supported properly through subsequent developmental steps. By conclusively establishing that a general pathway of developmental problems begins with poor perinatal status, anti-social difficulties in adolescence can be limited by examining various stages of child development and resolving cognitive and emotional issues as they arise. Future research could substantially reduce anti-social behavior, which is often a factor in multiple mental disorders.

Normal-term children with adverse perinatal conditions may also have more modifiable outcomes than early or late-term infants, allowing this research to have a profound effect on the lives of many future children. Perinatal research can aid the early identification and support of children at risk for neurological, emotional, or cognitive problems. Individuals at risk could be recognized with simple tests that nurses could administer.

Early detection of these problems could aid the educational system financially by promoting earlier and less costly treatment. With specialized management, educators might better lead students toward academic and social success. In addition, parents equipped with knowledge of the difficulties their children might face could be better prepared to work with the educational system and with their children [4]. Lastly, the healthcare system would be well served, as doctors could spend their time treating ailments rather than making diagnoses. Although Nomura’s study does not have complete external validity (88% of the infants were of African American descent), future studies could incorporate a more representative population sample, allowing these beneficial measures to be implemented on a firmer footing.

In the field of child development, this research has laid a significant stepping stone toward reducing anti-social behavior in adolescence, as well as other psychological problems throughout childhood. The longitudinal study of Nomura et al. represents significant progress in understanding the long-term effect of the overall perinatal condition and its ability to predict developmental problems. Future research can help determine whether a true cause-and-effect relationship exists between perinatal status and developmental psychological outcomes such as anti-social disorder. Clearly, research on perinatal status and its effect on anti-social behavior in adolescents is a major step forward in child development and psychology.

Manivel Rengasamy is a junior double-majoring in Biochemistry & Cell Biology and Psychology at Lovett College.

References

1. Allen N. B., Lewinsohn P. N., & Seeley J. R. Prenatal and perinatal influences on risk for psychopathology in childhood and adolescence. Development and Psychopathology 2008; 10, 513-529
2. Bhatia SK, Bhatia SC. Childhood and adolescent depression. Am Fam Physician 2007; 75: 73-80, 83-84
3. Kenworthy OT, Bess FH, Stahlman MT, Lindstrom DP. Hearing, speech, and language outcome in infants with extreme immaturity. Am J Otol 1987;8:419-425
4. Litt J., Taylor HG, Klein N, Hack M. Learning disabilities in children with very low birthweight: Prevalence, neuropsychological correlates, and educational interventions. Journal of Learning Disabilities 2005;38:130–141
5. Nomura Y., Rajendran K., Brooks-Gunn J., Newcorn JH. Roles of perinatal problems on adolescent antisocial behaviors among children born after 33 completed weeks: a prospective investigation. J Child Psychol Psychiatry 2008; 49:1108–1117
6. Saigal S, Pinelli J, Hoult L, Kim MM, Boyle M. Psychopathology and social competencies of adolescents who were extremely low birth weight. Pediatrics 2003;111:969–975

Ethics and Integrity in Science

by: Dr. Kathleen S. Matthews


The nature of science is cumulative, building upon past discovery to open new insights into the natural world. This unique character of the research enterprise demands a high level of trust in the knowledge base upon which the next steps are constructed. As a consequence, the expectation of ethical behavior in the scientific community is very high. If the foundation for an experiment or theory has been flawed by intention, significant effort and expense are wasted, and careers can be compromised. This need for confidence in the work of others, together with the awareness that intentional flaws or deceit are inevitably uncovered in attempts to replicate or extend research findings, drives a high level of ethical conduct in the scientific community.

Each individual scientist inevitably brings a set of perceptions with the potential to bias theory/experimental design or even interpretation, but the multiplicity of critiques that any discovery must undergo — from laboratory group discussions to peer review — ensures a wide variety of corrective inputs. Intentional violations of the expectations of accuracy in gathering, reporting, and interpreting data or formulating theories are effectively anti-science. Indeed, the fundamental importance of behavior that aligns with the expectations of the profession is reflected in the institution of systems within federal funding agencies to promote research integrity and, when necessary, to investigate lapses in ethical conduct of research. The Office of Research Integrity at the U.S. Department of Health and Human Services, http://ori.hhs.gov/, provides links to the research misconduct policies for many federal agencies. The emphasis on this issue is also seen in the requirement for instruction in the ethical conduct of research — from how to collect and analyze data and report results to treatment of animal and human subjects — associated with training grants from federal sources.

The challenges become even more complex for research involving human subjects. For example, “informed consent” is required for participation in clinical research trials, and a widespread view is that this consent is sufficient to render clinical research ethical. However, others have argued that “informed consent is neither necessary nor sufficient” [1] to establish an ethical base for research involving humans and that a larger set of questions must be addressed before individuals should be enlisted for clinical studies. Ethical lapses of commission or omission in clinical studies have the potential for dire consequences to the participants.

Beyond the larger issues, including fabrication or falsification of data, plagiarism, and other types of misconduct, ethical questions reach deeply into the scientific process and have been addressed from many perspectives [e.g., 2]. One of the most important concerns regards management of data. Interestingly, the majority of research misconduct findings in the federal review processes involve data falsification and fabrication, some simply as the result of poor data recording and management. For this reason, data management practices are emphasized in training scientists. How data are collected and recorded determines their reliability, and undergraduate teaching laboratories make significant efforts to guide students in developing good habits for the future. Data interpretation is particularly important — all data collected should be utilized in framing interpretations and developing hypotheses. Thus, data “selection” is not acceptable, and data inconsistent with the hypothesis guiding the experiments should never be withheld or ignored.

An area of particular concern in the modern computer age is the capacity to manipulate images and datasets, or even to fabricate them to one’s own end, now that visual images and data from many sources are digitally recorded and maintained. The conclusions drawn from such fabricated or distorted data can be particularly destructive, both undermining others working in the same area and eroding confidence in the scientific enterprise itself.

The impact of scientific misconduct is reflected in its own entry in Wikipedia (http://en.wikipedia.org/wiki/Scientific_misconduct), which details the subject further and provides links to accounts of high-profile examples of research misconduct. Recent examples include a prominent scientist from Seoul National University (Korea), who was indicted for embezzlement and for fraud based on fabricated data in reports on generating human embryonic stem cells; a research scientist from Bell Labs, who was fired and whose doctoral degree was revoked by the University of Konstanz for using falsified data in publications reporting single-molecule semiconductor technology; and a professor at the University of Vermont, who, in 2006, became the first academic ordered to serve federal prison time for falsifying data in a research grant application to the NIH and was also found to have fabricated data in ten different published papers. As is evident, the consequences to the perpetrators, as well as to those attempting to replicate fabricated work, can be disastrous.

The ethical responsibilities of scientists encompass not only exercising a high level of integrity in our work, but also include understanding the social and ethical implications of our research [3]. Sharing our excitement in understanding the world/universe and enticing others to engage in that process are part of helping embed science as a part of our society. In particular, scientists are important players in making new policy, and finding a path that introduces our knowledge base into the complexities of political decision-making must be placed in an ethical framework. Roger Pielke [4] has suggested a variety of ways in which scientists can engage in sharing their experience and insight in creative, ethical, and useful ways. Rice University’s Baker Institute for Public Policy (BIPP) has been active in inviting scientists to share their expertise and in providing a variety of venues for reflection and discussion. Students interested in becoming involved can engage through the student arm of BIPP or contact Kirstin Matthews (krwm [AT] rice dot edu).

Ethical conduct is important in all aspects of our lives, but the very nature of science demands a high level of integrity. Indeed, the essential process of science with its accumulation of knowledge cannot function without honest disclosure. We rely on the honor of our colleagues and their scrupulous efforts to report their observations accurately. Without this element, the structure that we are building will be flawed and eventually collapse. That is not to say that scientists interpret all data correctly or that unintentional mistakes do not occur, but the primary intent required of the scientist is to report what is observed as honestly and completely as possible. What scientists have interpreted incorrectly will be discovered and set right as an integral part of the larger collective process. The necessity for correction because of intentional deceit corrosively undermines the scientific community.

The excitement of science is found in the effort to answer current questions about how things work and then pose new questions unanticipated by present knowledge. Since we often do not recognize what we do not know, a single breakthrough can open unimagined new horizons and render significant change, both in science and society. The discoveries of restriction enzymes, the magnetic properties of atomic nuclei, or Buckyballs (right here at Rice) are examples that have opened whole new territories for investigation and evoked substantial societal change. For example, without our knowledge of the magnetic properties of nuclei, invasive surgery rather than simple MRI scans would be needed to “see” what is happening inside our bodies.

The on-going scientific process of opening and exploring new territory at the frontiers of knowledge is highly robust because the discovery of error — intentional or otherwise — is inherent in the system. Most scientists exercise a high degree of integrity precisely because the journey beyond the far horizons of our current understanding is exhilarating and we wish nothing to impede our pioneering expeditions into the unknown!

Dr. Kathleen S. Matthews is the former Dean of Natural Sciences and a Stewart Memorial professor of Biochemistry & Cell Biology.

References

1. Emanuel, E.J., Wendler, D., Grady, C., What makes clinical research ethical? Journal of the American Medical Association 283, 2701-2711 (2000).
2. Montgomerie, B., and Birkhead, T., A beginner’s guide to scientific misconduct, ISBE Newsletter 17, 16-24 (2005).
3. Beckwith, J., and Huang, F., Should we make a fuss? A case for social responsibility in science, Nature Biotechnology 23, 1479-1480 (2005).
4. Pielke, R.A. Jr., The honest broker — Making sense of science in policy and politics, Cambridge University Press, Cambridge, UK (2007).

Tissue-Engineered Articular Cartilage

by: Vasudha Mantravadi, ’12


A star football player watches his knee buckle backward, hears an excruciating “pop,” and realizes his worst fear has come true – a meniscal tear. A 65-year-old woman rises from her chair to sense a stiffness in her ankles that forces her to limp, reminding her that her osteoarthritis is growing worse day by day. Despite their broad age gap, these two are victims of similar afflictions that are caused, at least in part, by damage to cartilage.

Cartilage is a compact connective tissue whose chemical and physical properties allow it to offer support and facilitate smooth movement in skeletal joints. It is composed of cells known as chondrocytes and a surrounding extracellular matrix (ECM) that contains various proteins and other structural components. Together, these extracellular components form a gel-like mesh that gives cartilage its mechanical strength. However, if cartilage is damaged, chondrocytes cannot regenerate it, not only because they are too few, but also because they lack access to blood and nutrients [1]. This calls for a tissue-engineering approach that can create new cartilage to implant into joints that have suffered this kind of damage.

Many labs have worked toward developing such an approach, but in 2006, Rice Professor of Bioengineering Dr. Kyriacos Athanasiou and his group, most notably Dr. Jerry Hu, pioneered a unique self-assembly method of growing cartilage. This method involved seeding a high density of chondrocytes in agarose wells, which served as three-dimensional “molds” in which the tissue constructs formed. Unlike most previous attempts, this new method did not rely on the construction of “templates” known as scaffolds to direct tissue formation [2]. The Athanasiou lab has continued to use the self-assembly method to create articular cartilage (the smooth tissue lining the ends of bones at movable joints) and to analyze its biochemical and mechanical characteristics during development. The lab has been able to modify this technique to closely mirror the developmental stages of native articular cartilage and even to identify key points for effective manipulation of tissue growth.

One of their research projects has focused on the ECM development during cartilage self-assembly and the specific trends in matrix components such as various collagen types, glucose and glycosaminoglycan (GAG) types, and N-cadherin. By relating these observations with the results of mechanical testing, Dr. Athanasiou’s team has identified structure-function relationships in the developmental process that make articular cartilage the strong material it is [3].

To create the cartilage tissue constructs, the lab members first prepared a mixture of bovine articular chondrocytes in a culture medium and inserted it into agarose wells. From this point onward, they conducted the experiment in two phases over a total of 8 weeks, collecting both qualitative and quantitative data on the ECM components. For example, immunofluorescent staining, which involves labeling specific molecules and structures using antibodies, was used to map the general occurrence of N-cadherin and collagen types I, II, and VI. Safranin-O staining was used to label GAG. Shifting to a more specific analysis, the team collected biochemical data to find the net amounts of GAG, glucose, collagen, and individual collagen types contained in each construct. They found that overall GAG concentration increases during the growth of the constructs, but a certain type of GAG known as chondroitin 4-sulfate (CS-4) increases at a faster rate than the related chondroitin 6-sulfate (CS-6). Collagen type II increases and spreads throughout the tissue whereas collagen type VI decreases and ultimately remains in the region immediately outside the cells. N-cadherin, the protein responsible for intercellular adhesion of chondrocytes (and therefore crucial to the initial steps of the self-assembly process), has an increased presence only for a short time during the early stages of development.

The tissue samples from the second phase were exposed to mechanical stresses, such as tension and compression. These tests were then correlated with the biochemical data to reveal some interesting relationships between structural composition and function. For example, the growing trend in GAG levels corresponded with increasing compressive stiffness. The net amount of collagen as well as the collagen type present in the construct determined its tensile strength.
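A structure-function relationship of this kind is typically quantified by fitting a line through paired biochemical and mechanical measurements. The values below are invented for illustration and are not the paper’s data; the sketch simply shows the kind of ordinary least-squares fit that captures “more GAG, stiffer in compression.”

```python
# Hypothetical paired measurements from six constructs (illustrative only):
# GAG content (% wet weight) and compressive stiffness (kPa).
gag       = [1.0, 2.1, 3.4, 4.2, 5.0, 5.8]
stiffness = [30,  55,  90, 110, 135, 150]

n = len(gag)
mean_g = sum(gag) / n
mean_s = sum(stiffness) / n

# Ordinary least squares: slope = cov(gag, stiffness) / var(gag).
slope = (sum((g - mean_g) * (s - mean_s) for g, s in zip(gag, stiffness))
         / sum((g - mean_g) ** 2 for g in gag))
intercept = mean_s - slope * mean_g

print(f"stiffness ~= {slope:.1f} * GAG + {intercept:.1f}  (kPa)")
```

A positive slope is the quantitative counterpart of the trend reported above: constructs that accumulated more GAG were stiffer under compression.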

Another important finding of this experiment was the identification of the point, around four weeks, at which significant changes in the localization of collagen and GAGs occurred. The lab members believe this may be an effective time to introduce stimuli that influence the overall development of the constructs, allowing them to enhance the self-assembly process and achieve one of the main goals of this research [3]. This project was aimed mainly at gaining a greater understanding of the self-assembly process in articular cartilage, but future projects will go even further by using this knowledge to modify that process: to truly engineer it.

According to one of the lab members, MD/PhD student Sriram Eleswarapu, one of the major problems facing tissue engineering is “making a three-dimensional tissue that is mechanically stable – this is an engineering feat in itself.” However, once the self-assembly process is improved, this feat will come closer to real clinical application. At the same time, the investigation of alternative cell sources such as embryonic, mesenchymal, and induced pluripotent stem cells, each of which is studied by the Athanasiou group, may eliminate the need to extract cells from individual patients, making it more feasible to transplant engineered cartilage into those whose own cartilage is damaged.

Vasudha Mantravadi is a freshman majoring in Bioengineering at Jones College.

References

1. “Creating a More Natural Engineered Cartilage.” http://www.arthritis.org/natural-engineered-cartilage.php (accessed 10/13/08)
2. Jade Boyd. “Rice bioengineers pioneer method to grow replacement cartilage.” http://www.media.rice.edu/media/NewsBot.asp?MODE=VIEW&ID=8375 (accessed 10/14/08)
3. Ofek G, Revell CM, Hu JC, Allison DD, Grande-Allen KJ, et al. (2008) Matrix Development in Self-Assembly of Articular Cartilage. PLoS ONE 3(7): e2795. doi:10.1371/journal.pone.0002795

The Living Genome

by: Tina Munjal, ’12


The genome is indeed a curious creature. You may find it strange for a set of assorted strings of nucleotides to be referred to as a “creature,” because the implication would be that this collection of macromolecules is somehow alive and capable of self-preservation. Yet this is precisely the portrait that author Matt Ridley paints in his book Genome: The Autobiography of a Species in 23 Chapters.

Evidence for the idea of genomic autonomy can be found in the very origins of life. Ridley ponders these beginnings when he says, “It now seems probable that the first gene, the ‘ur-gene,’ was a combined replicator-catalyst, a word that consumed the chemicals around it to duplicate itself.” Thus, this gene served no higher purpose than to ensure its own survival and propagation. The ur-gene was not a subservient sequence.

If the initial purpose of genes was to attend to themselves, how is it that DNA has, over time, become an instrument of the organisms in which it resides? Or has it? As Ridley explains, it may be quite possible that the genome remains the omniscient executive, and the organism is but its compliant agent. The end goal of the genome-organism system is the proliferation of the genetic material itself, not of the “essence” of the organism in any platonic sense.

The most fundamental display of this phenomenon occurs during bacterial conjugation, in which each bacterium is a “temporary chariot” for the genes it carries. The transfer of genetic material can then be compared to the formation of “transient alliances” between the involved parties. Ridley proposes that, over time, genes “found a way to delegate their ambitions, by building bodies capable not just of survival, but of intelligent behavior as well. Now, if a gene found itself in an animal threatened by winter storms, it could rely on its body to do something clever like migrate south or build itself a shelter.” The author’s choice of words leaves no doubt that the genes, although relatively flexible masters, do indeed hold the reins over the actions of the organism to ensure that they are in agreement with—not in opposition to—the survival of the genetic message.

Extending this notion of genes as active doers rather than passive coders, Ridley offers a plausible explanation for the presence of so-called “junk DNA” in the genome. He likens these seemingly useless sequences, such as retrotransposons, to parasites that, at some point in the distant past, invaded the endogenous DNA. Just as there is war among nations to exert and expand influence, there is “competition between genes using individuals and occasionally societies as their temporary vehicles.” DNA is therefore much like a battle zone in constant evolutionary flux.

Ridley’s discussion of the relationship between genome and organism is at once fascinating and humbling. However, it would be insufficient to state that all of the actions of an organism, especially a human, lead, so imposingly and inevitably, to nothing more than the proliferation of a collection of genetic material. Further, the image of the genome as the driver of some insentient chariot is not enough to describe the intricate exchanges that occur between genes, mind, body, and environment. Ridley recognizes this fact and even likens the fragile interaction to the workings of a free market. Just as there is no centralized authority making the economy’s decisions, “You are not a brain running a body by switching on hormones. Nor are you a body running a genome by switching on hormone receptors. Nor are you a genome running a brain by switching on genes that switch on hormones. You are all of these at once.” The organism, then, possesses a variety of hierarchical systems that work simultaneously to form a functional whole.

This idea of shared authority smoothly propels Ridley’s discussion into the concluding philosophical treatise on determinism versus free will. On the one hand, there are those who argue that genes cannot be what shape behavior and personality—such a view, they say, would be entirely too deterministic. These proponents of the alternative hypothesis believe that the environment, not any intrinsic genetic factor, shapes individuals. However, as Ridley asks, is this not even more deterministic? He cites the example of Aldous Huxley’s frightening novel Brave New World, in which “alphas and epsilons are not bred, but are produced by chemical adjustment in artificial wombs followed by Pavlovian conditioning and brainwashing…” In this world, genes factor very little into the equation, and the hellish, deterministic society is created entirely by environmental manipulation. So the essential question remains: do we have conscious control over our fates? The dilemma is not a new one. As Ridley points out, the philosopher David Hume summarized the problem with what is known as Hume’s fork: “Either our actions are determined, in which case we are not responsible for them, or they are random, in which case we are not responsible for them.”

Perhaps the best answer comes from chaos theory, which holds that the general course of events can be predicted with some certainty even while the finer details of that course remain unknown. As Ridley explains, for various reasons an individual chooses whether to eat a particular meal and the time at which the meal is taken. It can be said with confidence, however, that the individual will eventually eat at some point in the day. Eloquently, Ridley begins to close, “This interaction of genetic and external influences makes my behaviour unpredictable, but not undetermined. In the gap between those words lies freedom.” There is indeed flexibility within determinism, and consciousness alongside instinct.
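Ridley’s “unpredictable, but not undetermined” is the signature of chaotic dynamics. A standard classroom illustration (not from the book) is the logistic map: a fully deterministic rule whose coarse behavior is predictable while its fine details are exquisitely sensitive to the starting point. The parameter values below are conventional choices for the chaotic regime, not anything specific to Ridley’s argument.

```python
# Logistic map x -> r*x*(1-x): deterministic, yet chaotic for r near 4.

def trajectory(x0, r=3.9, steps=80):
    """Iterate the logistic map from x0, returning the whole trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = trajectory(0.400000)
b = trajectory(0.400001)  # starting point differs by one part in a million
divergence = max(abs(x - y) for x, y in zip(a, b))

# Coarse behavior is predictable: every value stays strictly between 0 and 1.
# Fine behavior is not: the tiny initial difference grows until the two runs
# disagree wildly, even though the rule itself is perfectly deterministic.
```

The determined-but-unpredictable gap Ridley points to is exactly this: knowing the rule does not let you forecast the details.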
For all of its impossible questions and imaginative answers, Matt Ridley’s book on the self-replicating, autobiographical, living, changing genome is worth its place on any genome-holder’s bookshelf.

Tina Munjal is a freshman double-majoring in Biochemistry & Cell Biology and Cognitive Sciences at Wiess College.

References

1. Ridley, Matt. Genome: The Autobiography of a Species in 23 Chapters; Harper Perennial: New York, 1999.

Synthetic Biology: Engineering Biological Systems

by: David Ouyang and Dr. Jonathan Silberg

(You may also see the full spread of this article in a PDF.)

Abstract

Recent advancements in molecular biology and biochemistry allow for a new field of bioengineering known as synthetic biology. Using biological parts discovered in the last thirty years and mathematical models grounded in physical principles, synthetic biology seeks to create biological systems with user-defined behaviors. The major focus of research in this emerging field is the characterization of genetic regulation and the abstraction of biological systems to clearly defined logic circuits. With the abstraction of individual DNA sequences to known biological functions, synthetic biologists seek to create a standard list of interchangeable biological parts as the foundation of this emerging field. Through genetic manipulation, these parts are expected to be useful for programming biological machines that process information, synthesize chemicals, and fabricate complex biomaterials that improve our quality of life.

Genomic Era and Tools of the Trade

On June 26, 2000, President Bill Clinton and Prime Minister Tony Blair, joined by Francis Collins, director of the Human Genome Project at the NIH, and Craig Venter, president of Celera Genomics, announced the completion of the first draft sequence of the human genome. With this wealth of information, scientists and policy-makers alike were eager to usher in the genomic era. Doctors dreamed of personalized medicine, in which genomic information could be used to diagnose individual predispositions to cancer and disease. Politicians pondered the implications of genetic profiling, in which insurance companies could potentially use genetic information to screen policyholders. The genomic era is bright with promise and unprecedented potential but also rife with social implications and practical challenges.

While a sequenced genome provides a wealth of new information, and the scientific community is quick to emphasize its potential, many challenges remain in its interpretation and analysis. Interpreting genomic data requires both high-throughput techniques, such as microarray analysis, and heuristic bioinformatics algorithms capable of handling large amounts of data. Microarray analysis allows researchers to measure the differential expression of many proteins across species, ages, and disease states. With roughly three billion base pairs in the human genome and tens of thousands of open reading frames, the sheer size of the genome requires ad hoc analytical methods. The traditional approach of analyzing individual enzymes and molecules is complemented by a recent drive to understand entire systems, regulatory networks, and gene families. Exponentially increasing information on biological organisms and growing computational power have broadened the perspective of current biological research.

Although genomic sequences provide insight into the enzymes that make up an organism, understanding how these parts work together to produce complex phenotypes is the focus of current research. Understanding the regulation of gene expression and multicellular development will require a deeper analysis of how transcription and mRNA stability are regulated in response to environmental stimuli. Despite the age-old nature-versus-nurture debate, it is the interplay of the environment and gene products that determines disease states and merges to create the fascinating output of life. A greater understanding of the regulation of gene products is needed to determine their effects on physiology and development. Synthetic biology seeks to understand biological regulation and to apply that understanding to general problems.

Recombinant DNA technology laid the foundation for manipulation of biological systems on a molecular level, but recent advances in DNA sequencing and synthesis technology have greatly expanded the potential of biological engineering projects. The decreasing cost of oligonucleotide synthesis as well as improved techniques of combining oligonucleotides allows unparalleled flexibility in synthesizing long DNA sequences. From traditional methods of subcloning using restriction endonucleases and ligases to polymerase-based techniques such as gene Splicing by Overlap Extension (gene SOEing), researchers have unprecedented power in their ability to alter and characterize DNA. We can now identify new genes or regulatory sequences in diverse systems and recombine them into novel networks that attempt to recreate our understanding of existing biological systems. The rapidly expanding molecular biologist’s toolkit broadens the scope of manipulation to whole genetic systems instead of individual genes.

The current state of molecular biology has improved our understanding of the networks of biomolecular interactions that give rise to complex phenotypes and allows for unprecedented control of biological systems through careful characterization and synthetic techniques. Just as electrical engineering required growing skill in manipulating individual circuits and transistors, biology is on the cusp of its synthetic potential as new technologies overcome technical difficulties that challenged previous generations of scientists.

Concept

Synthetic biology can be described as a hierarchy of fundamental biological concepts. From discrete genetic parts to whole biological circuits, each level of regulation builds upon a lower level of biological function, with the ultimate goal of using biological systems to perform novel tasks or to improve upon natural functions. Individual genetic parts, or particular DNA sequences with known functionality, can be integrated into genetic circuits. Genetic circuits, or new combinations of regulatory and coding sequences, can be created to produce unique behavior. Ultimately, these genetic circuits can be incorporated into biological organisms or systems.

Ongoing efforts in synthetic biology are focused on the creation of reusable, modular fragments with clearly characterized behavior and functionality in biological systems. With the discovery of the lac operon, biologists recognized the possibility of digital, discrete outputs within biological systems. In the absence of lactose, the LacI repressor recognizes and binds to particular DNA sequences upstream of coding regions, blocking transcription of the gene products; when lactose is present, the repressor releases the DNA, switching expression on in an all-or-none fashion. With such clearly characterized behavior, the LacI repressor is already widely used as an integral part of simple genetic circuits in biotechnology applications, such as pET vectors. Treating these circuits as the biological analog of electronic circuits, researchers hope to use a growing repertoire of genetic parts to mimic logic functions and produce complex output.
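The all-or-none character of such a switch is commonly captured with a Hill function, in which cooperative binding makes the response steep and nearly digital. The sketch below is a generic illustration of that idea; the parameter values are made up for demonstration, not measured LacI constants.

```python
# A minimal sketch of switch-like gene regulation: an inducible promoter whose
# output follows a steep (cooperative) Hill function of inducer concentration.

def promoter_output(inducer, K=1.0, n=4, max_rate=100.0):
    """Transcription rate as a Hill function: near zero below the threshold K,
    near max_rate above it - an approximately all-or-none response."""
    return max_rate * inducer**n / (K**n + inducer**n)

low = promoter_output(0.1)    # far below threshold: promoter essentially OFF
high = promoter_output(10.0)  # far above threshold: promoter essentially ON
```

The higher the cooperativity n, the sharper the ON/OFF transition, which is what lets biologists treat such parts as digital components.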

The basic premise of synthetic biology is the ability to characterize and categorize a database of biological parts. A prominent example of this concept is the Registry of Standard Biological Parts (http://partsregistry.org/Main_Page) maintained by the BioBrick Foundation. Drawing upon the analogy of Lego bricks, synthetic biologists hope to use a standardized list of biological parts, ‘BioBricks’, to build large constructs with novel activity and unprecedented functionality. With defined activities for each component and a standardized subcloning method for combining DNA sequences, synthetic biologists hope to easily integrate ‘BioBricks’ to create novel biological circuits in a process analogous to the way computer scientists program computers. A database of DNA sequences, the Registry details the specific activity of individual sequences, their original sources, and additional information necessary for synthetic biologists to integrate biological parts into particular biological systems.
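The Lego-brick idea can be sketched in a few lines: parts with defined roles live in a registry and are composed in order into a device. Everything here is illustrative—the part names and sequences are made up, and real BioBrick assembly involves defined restriction-site prefixes and suffixes rather than bare concatenation.

```python
# A toy registry of parts (names and sequences invented for illustration).
REGISTRY = {
    "toy_promoter":   {"role": "promoter",   "seq": "TTGACATATAAT"},
    "toy_rbs":        {"role": "rbs",        "seq": "AGGAGG"},
    "toy_cds":        {"role": "cds",        "seq": "ATGGCTTAA"},
    "toy_terminator": {"role": "terminator", "seq": "GCGCAAGCGC"},
}

def compose(*part_names):
    """Assemble parts in the given order into one device sequence."""
    return "".join(REGISTRY[name]["seq"] for name in part_names)

# Promoter -> RBS -> coding sequence -> terminator: a minimal expression device.
device = compose("toy_promoter", "toy_rbs", "toy_cds", "toy_terminator")
```

The point is the abstraction: once each part’s behavior is characterized, building a new circuit becomes a matter of composition rather than bespoke cloning.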

Genetic Parts

In biological systems, information moves from DNA to RNA to protein, and it can be regulated at each of these three levels. First proposed by Francis Crick in 1958, this “central dogma of molecular biology” describes the residue-by-residue transfer of sequential information. Synthetic biology utilizes regulatory elements at each level of this scheme to create novel biological machines. On the DNA level, current understanding of genetic regulation reveals a complex system of promoters and terminators regulating transcription. The Registry contains a wide collection of parts for regulating transcription and translation, such as constitutive, inducible, and repressible promoters, operator sequences, and ribosomal binding sites. Promoters, the 5’ DNA sequences upstream of coding regions, determine the amount, duration, and timing of transcription. The Registry also catalogues a large collection of terminator sequences: 3’ regions downstream of coding sequences that form hairpin loops at the end of mRNA transcripts, causing RNA polymerase to dissociate from the template strand and end transcription. This compilation of a wide body of knowledge and literature about genetic regulation chronicles the behavior of many DNA sequences found in native systems.

Synthetic biology also applies knowledge of regulation at the RNA level, building upon a deep understanding of how protein production is controlled through mRNA stability and translation efficiency. Native systems display a wide range of RNA regulation that helps modulate where and when particular proteins are translated. Transcribed RNA has the unique characteristic of being able to form diverse secondary structures. Hairpin looping, the intramolecular base-pairing of palindromic RNA sequences, is the basis of RNA secondary structure and can be used to create complex three-dimensional structures. With this additional structural complexity, engineered RNAs function in RNA interference, act as riboregulators, and catalyze key reactions. RNA structures have been shown to mediate ligand binding and to display temperature-dependent activity. With temperature-mediated stability, RNA sequences with designed hairpin loops can function as biological thermometers, regulating temperature-sensitive expression.
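The hairpin idea itself is simple enough to sketch: a stem forms wherever one stretch of RNA is the reverse complement of a later stretch, with an unpaired loop in between. The toy scan below is a hypothetical illustration of that principle, not a real RNA-folding algorithm (real predictors such as mfold weigh thermodynamic energies, wobble pairs, and competing structures).

```python
# Watson-Crick pairing rules for RNA.
PAIRS = {"A": "U", "U": "A", "G": "C", "C": "G"}

def reverse_complement(rna):
    """Reverse-complement an RNA string, e.g. GGGC -> GCCC."""
    return "".join(PAIRS[base] for base in reversed(rna))

def can_form_hairpin(rna, stem_len=4, min_loop=3):
    """True if some stem_len-long segment can pair with a downstream segment,
    leaving at least min_loop unpaired bases as the loop."""
    for i in range(len(rna) - 2 * stem_len - min_loop + 1):
        stem = rna[i:i + stem_len]
        downstream = rna[i + stem_len + min_loop:]
        if reverse_complement(stem) in downstream:
            return True
    return False

# GGGC...GCCC can fold back on itself; a homopolymer cannot.
hairpin_possible = can_form_hairpin("GGGCAAAAGCCC")
```

Whether such a stem actually forms at a given temperature is exactly what makes hairpins usable as RNA thermometers.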

The use of riboregulators is a prominent example of applying understanding of RNA behavior to regulate the expression of gene products in biological systems. Collins et al. designed a system of RNA molecules that requires the cooperative function of multiple RNA molecules for translation to occur. An mRNA transcript carries an additional 5’ sequence complementary to its own ribosomal binding site, preventing the ribosome from binding and translating the gene product. This ‘lock’ sequence can be unlocked by the regulated production of another RNA molecule with complementary sequence and tighter binding affinity, allowing translation. Riboregulators with ligand-mediated activity can bind to specific mRNA transcripts, silencing translation of particular genes in response to exogenous stimuli. Both as sensors of environmental stimuli and as mediators of translation, RNA has a distinct regulatory role allowing for programmable cellular behavior.

Genetic Circuits

In synthetic biology, identified regulatory components are recombined into novel networks that behave in predictable ways. An early example of a genetic circuit is the AND gate. Mimicking digital logic, in which two inputs must both be present to produce a positive output, Arkin and coworkers designed and modeled a genetic circuit that synthesizes a marker protein only in the presence of both salicylate and arabinose. Salicylate and arabinose are two naturally occurring, freely diffusible metabolites to which bacteria normally react; this proof-of-principle construct showed the ability to produce a novel response to the simultaneous presence of both metabolites. Using two inducible promoters (NahR, induced by salicylate, and AraC, induced by arabinose), the circuit transcribes a mutant T7 polymerase gene and the SupD amber suppressor tRNA. The SupD tRNA allows translational read-through at the amber stop codon, while the mutant T7 polymerase transcript contains two internal amber codons. Without the SupD tRNA, the mutant T7 polymerase transcript yields only a nonfunctional protein product, while SupD alone cannot induce transcription from the T7 promoter. With both gene products present, a functional T7 polymerase is expressed, which then synthesizes any gene placed behind the T7 promoter.
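Abstracted away from the molecular details, the circuit’s logic reduces to a few lines. The sketch below is a simplified boolean caricature of the gate just described (it ignores leakiness, delays, and expression levels, which the actual modeling work treats quantitatively).

```python
# Boolean caricature of the salicylate/arabinose AND gate: salicylate drives
# the amber-mutant T7 polymerase transcript, arabinose drives the SupD
# suppressor tRNA, and only together do they yield functional polymerase.

def and_gate(salicylate: bool, arabinose: bool) -> bool:
    t7_mrna_present = salicylate        # NahR promoter fires on salicylate
    supd_trna_present = arabinose       # AraC promoter fires on arabinose
    # SupD lets the ribosome read through the internal amber stop codons, so a
    # functional T7 polymerase requires both transcripts at once.
    functional_t7 = t7_mrna_present and supd_trna_present
    return functional_t7                # drives the marker behind the T7 promoter

for sal in (False, True):
    for ara in (False, True):
        print(sal, ara, "->", and_gate(sal, ara))
```

Only the (True, True) row produces output—the defining truth table of an AND gate.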

An ultimate goal of genetic manipulation is the creation of genetic devices or systems that display characteristics or output not found in natural systems. An example of such a biological device is the repressilator, a genetic circuit that emulates a digital oscillator by cycling through the production of three different protein products. A system of mutually repressing gene products, the repressilator produces sequential expression of its three elements. Mimicking time-dependent processes commonly found in natural organisms, such as the KaiABC system and the circadian rhythm of photosynthetic organisms, this genetic circuit demonstrates the ability of simple DNA sequences to produce complex behaviors. Although this proof-of-concept construct is not as robust as natural systems, it demonstrates the potential of deliberate genetic engineering to create novel output and emulate natural organisms.

In the repressilator, a LacI-repressible promoter regulates a tn10 transposon gene product (TetR), which in turn represses a tn10 transposon promoter. This pTet promoter regulates the cI gene. A regulatory unit originally found in lambda phage, the cI protein controls a lambda promoter that natively governs switching between the lytic and lysogenic phases of the lambda phage life cycle; here that promoter drives LacI, closing the repression cycle. In a time-dependent manner, the repressilator mimics the circadian clock found in most eukaryotic and many prokaryotic organisms.
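The dynamics of this three-way repression cycle can be sketched with a reduced, protein-only version of the Elowitz–Leibler model. The parameters and the simplification below are illustrative assumptions, not the published values; the point is only that a symmetric cycle of repressors, integrated forward in time, settles into sustained oscillation rather than a steady state.

```python
# Protein-only repressilator sketch: three repressors in a ring, each protein
# repressed by the previous one (LacI -| TetR -| cI -| LacI). Each level obeys
# dp_i/dt = alpha / (1 + repressor^n) - p_i  (Hill-repressed synthesis, decay).

def simulate(alpha=50.0, n=3, dt=0.01, steps=20000):
    p = [1.0, 0.5, 0.2]                 # LacI, TetR, cI protein levels
    history = []
    for _ in range(steps):
        new = []
        for i in range(3):
            repressor = p[(i - 1) % 3]  # the previous protein in the ring
            dp = alpha / (1.0 + repressor**n) - p[i]
            new.append(p[i] + dt * dp)  # simple forward-Euler update
        p = new
        history.append(tuple(p))
    return history

traj = simulate()
# With repression steep enough, the levels never settle: each protein peaks in
# turn, emulating a digital ring oscillator in biochemistry.
```

Lowering the cooperativity n or the synthesis rate alpha damps the cycle back to a steady state, which is why the real construct needed strong, tight repressors.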

Applications and Conclusions

In the last one hundred years, electrical systems have changed the face of the earth. Since the invention of the transistor, computers, phones, and other electronic systems have encroached upon every aspect of daily life; one can barely go a day without using e-mail, television, or a camera. Synthetic biologists dream of another world-changing revolution. Through modular parts and deliberate design, synthetic biology hopes to engineer biological systems that tackle challenging problems. From smart, self-regulating treatments for cancer to new solutions to the global energy crisis, the ability to engineer biological organisms has the potential to address many pressing questions. The vast natural diversity of life is a testament to the potential and opportunities available in synthetic biology.

Many native biological organisms, such as E. coli and S. cerevisiae, are already used in pharmaceutical and biotechnology applications. With its goals of standardization and optimization, synthetic biology opens novel possibilities as well as improvements upon existing engineered systems. Regulating the interaction of bacteria, bacteriophage, and mammalian cells could enable applications in medical diagnosis and treatment. The feasibility of using bacteria in biofabrication and energy generation will require designed logic functions in biological systems and biological computation. One interesting area of investigation is the removal of non-essential genes from the E. coli genome to produce an idealized minimal cell. With less chance of interference from regulatory sequences and gene products, such a minimal “cell chassis” could be an optimal shuttle for synthetic gene networks. Simplifying the cellular environment allows for greater ease in characterizing and modeling biomolecular interactions.

Utilizing the modularity of many biological systems, researchers hope to eventually produce complex behaviors through the simple combination of different biological parts. However, important questions about the modularity of biological parts remain open. In an idealized world, biological parts and coding regions would work equally well in all cell types and organisms. Unfortunately, due to the inherent complexity of cells and the intrinsically noisy nature of molecular systems, a given module might not work in a different cellular environment or might not be optimized for maximum efficacy. The stochastic nature of biochemical interactions means more work is required to build synthetic models and thereby understand natural biological systems.

As genomic sequencing costs continue to decrease, the number of characterized native biological parts and uniquely designed parts will increase exponentially. Ultimately, synthetic biology introduces novel biological architectures not present in nature. As the field seeks to stretch biological limits and go beyond what currently exists, questions of ethics and morality need to be addressed. What should be the limits of investigation in this powerful field? With projects like the Venter Synthetic Genome Project, will the threshold between aggregates of molecules and life become further blurred? Should the human genome be manipulated, both for medical treatment and in non-life-threatening situations? How will intellectual property be handled, given that the objects in question are inherent in natural systems? This author does not have the answers to these difficult questions, but believes that one must balance the potential benefits against the putative risks in this potent area of research. With great power comes great responsibility; a critical and diligent eye must be maintained in this area of active research. In addition to advancing current knowledge, it is the responsibility of the scientific community to educate the public about the potential and the risks of synthetic biology. Both to prevent Luddite reactions and to address legitimate concerns, dialogue and education are required of a field that seeks to make a broad impact on society at large.

Applying the tools and understanding of molecular biology and biochemistry, synthetic biology focuses on engineering unique biological parts and systems. Through such an engineering approach, it also seeks to augment current approaches to understanding regulation. Designed structures and sequences, not found in natural systems, can be used to probe the finer details of regulation down to the very last nucleotide. As we continue to increase our knowledge of both prokaryotic and eukaryotic regulation, synthetic biologists continue to expand their repertoire of biological parts. Synthetic genome projects are currently underway, and new applications such as biological computation, biological fabrication of chemicals, and disease treatments are being unveiled. Coupled with the selection and refinement of genetic devices, deliberate genetic engineering has the potential to tackle many challenges in the near future.

David Ouyang is a sophomore Biochemistry & Cell Biology major at Baker College.

Acknowledgements

I would like to thank Dr. Jonathan Silberg for introducing me to this fascinating field of research and Dr. Daniel Wagner for his keen eye and constructive feedback. Their advice, encouragement, and support greatly aided in the writing process. I would also like to thank Erol Bakkalbasi for editing my paper.

References

1. PCR-mediated recombination and mutagenesis: SOEing together tailor-made genes. Mol. Biotechnol. 1995, 3(2), 93-99. http://www.ncbi.nlm.nih.gov/pubmed/7620981
2. Schuster, S.C. Next-generation sequencing transforms today’s biology. Nat. Methods 2008, 5, 16-18. doi:10.1038/nmeth1156
3. Narberhaus, F.; Waldminghaus, T.; Chowdhury, S. RNA thermometers. FEMS Microbiol. Rev. 2006, 30(1), 3-16. PMID:16438677
4. Isaacs, F.J.; Dwyer, D.J.; Collins, J.J. RNA synthetic biology. Nat. Biotechnol. 2006, 24, 545-554. doi:10.1038/nbt1208
5. Suess, B.; Fink, B.; Berens, C.; Stentz, R.; Hillen, W. A theophylline responsive riboswitch based on helix slipping controls gene expression in vivo. Nucleic Acids Res. 2004, 32, 1610-1614.
6. Collins, J.J. et al. Engineered riboregulators enable post-transcriptional control of gene expression. Nat. Biotechnol. 2004, 22, 841-847. doi:10.1038/nbt986
7. Poinar, H.N. et al. Science 2006, 311, 392-394.
8. Morozova, O.; Marra, M.A. Applications of next-generation sequencing technologies in functional genomics. Genomics 2008, 92(5), 255-264.
9. Molecular basis for temperature sensing by an RNA thermometer. EMBO J. 2006, 25, 2487-2497. doi:10.1038/sj.emboj.7601128
10. Elowitz, M.B.; Leibler, S. A synthetic oscillatory network of transcriptional regulators. Nature 2000, 403, 335-338.
11. Synthetic biology: new engineering rules for an emerging discipline. Mol. Syst. Biol. 2006, 2, 2006.0028. doi:10.1038/msb4100073
12. Voigt, C. Genetic parts to program bacteria. Curr. Opin. Biotechnol. 2006, 17(5), 548-557.
13. Guttmacher, A.E.; Collins, F.S. Welcome to the genomic era. N. Engl. J. Med. 2003, 349, 996-998.
14. Anderson, J.C.; Voigt, C.A.; Arkin, A.P. Environmental signal integration by a modular AND gate. Mol. Syst. Biol. 2007, 3, 133.

Immunization and Public Health

by: Trishna Narula, ’12

(You may also see the full spread of this article in a PDF.)

Memories of the summer before I matriculated at Rice would be incomplete without images of heaps and stacks of various forms waiting to be completed – one of which was a health record to be signed by my pediatrician. I decided it would be better to verify for myself whether I was up to date on my vaccines rather than receive an unanticipated prick at the doctor’s office. Thus, like hundreds of millions of people around the globe, I turned confidently to my omniscient and omnipresent companion Google to answer my queries. I entered the word “immunizations,” and precisely 0.16 seconds later, I was rewarded with various reliable sites such as the CDC’s and the NIH’s. However, preceding this obliging list was the phrase “related searches: against immunizations,” glaring at me in underlined, bold font. Curious, I clicked it. Whatever could this mean? The search engine directed me back in time…

Once upon a time, bacteria and viruses dominated the world. Actually, even today there are more microbes on your hand than there are people on the planet! However, I’m talking about the more pathogenic bacteria and viruses here. Their lives (or “lives,” in the case of viruses, which are scientifically classified in the gray area between living and nonliving) consisted of finding an innocent animal or human cell in which – once they had invited themselves in through endocytosis – they established residence and from which they pinched food and nutrients. If for some reason things didn’t quite work out in their favor, they could easily pack up and drift to a more vulnerable host that would allow them to proliferate exponentially. These parasites were undoubtedly very happy until one fine day in 1796…

Little Miss Smallpox sat in a big ox, eating its host cells away. Along came Edward Jenner, and sat down beside her, and frightened the virus away.

Well, he had to put in a little more effort than just sitting next to her. Jenner successfully tricked the human immune system into believing that a small amount of injected cowpox virus was actually the related but much more lethal virus smallpox, hence inciting an immune response from the person and inoculating the person against smallpox in the future.

The term “vaccination” was thus coined from vacca, Latin for cow, and was initially used solely for poxvirus immunogen injections. Since then, however, vaccines for hepatitis A, hepatitis B, polio, mumps, measles, rubella, diphtheria, pertussis, tetanus, Haemophilus influenzae, chicken pox, rotavirus, influenza, meningococcal disease, and pneumonia have been developed, and most recently, vaccines for human papillomavirus and shingles. (Ironically, since smallpox was declared eradicated in 1979, routine vaccination against it is no longer administered.)

Not only have these immunizations been developed, but they have also been widely implemented, resulting in one of the most successful public health endeavors in the history of mankind. In 2006, approximately 77% of 19 to 35-month-olds in America received all their recommended vaccinations, a record high rate for the nation.

Nevertheless, as Benjamin Dana said, “There has been opposition to every innovation in the history of man.” Microorganisms are once again gaining strength, and we the people are the ones abetting them. More and more individuals are beginning to believe that vaccines are perhaps more medically harmful than they are beneficial, and parents are indeed finding legal ways – such as religious or philosophical waivers – to opt their children out of the requirement.

Undeniably, one of the basic arguments is a fairly legitimate one: babies less than a day old, with feeble immune systems, are being infused with samples of potentially potent microorganisms. Simple reasoning seems to rule out any beneficial consequence of this seemingly dangerous practice. However, empirical studies beg to differ. One particularly significant study from Denmark, published in 2005, demonstrated the lack of correlation between increased vaccine exposure and a higher rate of infections. In fact, common childhood fevers pose a greater threat to the immune system than vaccines do.

A more specific and increasingly popular issue is the controversy created by claims that the MMR (measles-mumps-rubella) vaccine causes autism. Murmurs of this hypothetical link arose when the rate of autism diagnoses began to climb during the 1980s and 1990s, at the same time that the MMR vaccine was being introduced in Britain and its use was mounting in the United States. However, the last straw was a 1998 article in The Lancet, authored by Andrew Wakefield, M.D., that implicitly proposed this connection.

Wakefield’s research was, from the beginning, broadly refuted by the scientific and medical community, including the overwhelming majority of the original article’s co-authors. In addition, a couple of likely coincidences that would skew the data must be taken into account. First, though diagnoses of autism have undoubtedly increased dramatically, the phenomenon may very well be attributed to heightened awareness of the disorder, leading to better detection or even a wider definition of it. Alternative explanations include environmental and genetic factors. Secondly, the mere temporal coincidence of the typical onset of autism and the usual administration of the MMR vaccine may easily have been mistaken for a link between the two. Indeed, later studies clearly demonstrate the absence of an association.

Nevertheless, the damage had been done to the public psyche, and the media reinforced the idea to an immense extent. Immunization rates had already started dropping. Moreover, there was another factor fueling the pandemonium…

From the 1930s until the turn of the century, thimerosal, or sodium ethylmercurithiosalicylate, was used as a preservative in several vaccines (ironically, the MMR not being one of them) to thwart contamination by bacteria and fungi. Sounds all fine and dandy, right? Wrong. Take a look at that chemical name again… thimerosal is nearly half mercury by weight. Excessive mercury can cause substantial brain damage, especially in children, whose actively growing and developing brains are most vulnerable. Although it was soon realized that thimerosal degrades to ethylmercury rather than the better-known and more toxic methylmercury, the situation remained ambiguous enough for thimerosal to be removed from all required childhood vaccines. By 2001, new vaccines were being packaged in single-dose vials; though more expensive, these do not require thimerosal as a preservative.
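That “nearly half mercury by weight” claim is easy to verify from thimerosal’s molecular formula, C9H9HgNaO2S, using standard atomic masses (the arithmetic below is a check of the article’s figure, nothing more).

```python
# Mass fraction of mercury in thimerosal (C9H9HgNaO2S), from atomic masses.

ATOMIC_MASS = {"C": 12.011, "H": 1.008, "Hg": 200.59,
               "Na": 22.990, "O": 15.999, "S": 32.06}
FORMULA = {"C": 9, "H": 9, "Hg": 1, "Na": 1, "O": 2, "S": 1}

molar_mass = sum(ATOMIC_MASS[el] * count for el, count in FORMULA.items())
hg_fraction = ATOMIC_MASS["Hg"] / molar_mass  # about 0.496, i.e. ~49.6% mercury
```

One mercury atom out of a roughly 405 g/mol molecule works out to just under half the compound’s weight, exactly as the article says.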

Despite this change, autism rates have continued to rise. Indeed, committees convened by the CDC and the WHO have concluded time and again over the past decade that there is no scientific evidence supporting a link between thimerosal and autism.

Evidently, a major shortcoming in our society is the lack of accurate awareness of these topics. Even if one goes back to a straightforward cost-to-benefit ratio, the majority of today’s generations haven’t witnessed any epidemics; thus, they cannot rationally perceive the full extent of such widespread disease. These individuals find it difficult to grasp that although vaccinations have reduced so many diseases nearly to the point of extinction, diminishing coverage can easily bring them back in full force.

Unfortunately, a rather large handful of parents seem to be thinking along the lines of, “Even if my kid doesn’t get vaccinated, all the other kids around him will be, so it will shield him anyway.” But if the other kids’ parents catch on to the same free-rider idea, the insulation bubble will quickly begin to disappear, failing to protect even those who genuinely have no other choice – for example, cancer patients with very weak immunity or individuals whose vaccinations were not 100% effective.
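The “insulation bubble” has a name in epidemiology: herd immunity. A standard rule of thumb (not from this article) says sustained transmission stops once the immune fraction of the population exceeds 1 − 1/R0, where R0 is the pathogen’s basic reproduction number. The sketch below illustrates why free-riding is so fragile for contagious diseases.

```python
# Herd immunity threshold from the basic reproduction number R0: the classic
# epidemiological approximation 1 - 1/R0 (assumes homogeneous mixing).

def herd_immunity_threshold(r0):
    """Minimum immune fraction needed to stop sustained spread."""
    return 1.0 - 1.0 / r0

# Measles is among the most contagious diseases, with R0 often cited around
# 12-18, so even a small number of opt-outs can puncture the bubble.
for r0 in (2, 5, 15):
    print(f"R0 = {r0:>2}: need {herd_immunity_threshold(r0):.0%} of the population immune")
```

At an R0 of 15 the required coverage is over 93%, which is why a modest cluster of unvaccinated children is enough to let outbreaks through to those who cannot be protected any other way.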

The take-home message here is transparent: vaccinations are essential. As a rule of thumb, the benefits outweigh the risks (at least until we can genetically screen people to predict their individual reactions to a vaccine), since a link between vaccines and autism has never been proven. To put a number on that fact, the CDC estimates that childhood immunizations save 33,000 lives annually in the U.S. alone. Most importantly, we must realize that everyone – parents, doctors, policymakers – is on the same side in the battle against disease and for human life.

Trishna Narula is a freshman double-majoring in Biochemistry & Cell Biology and Psychology at Will Rice College.


It’s a Coyote Eat Deer Feed Tick World: A Deterministic Model of Predator-Prey Interaction in the Northeast

by: Orianna DeMasi and Kathy Li

For the full article complete with figures, please see the pdf of this article from the magazine.

Abstract

Occurrences of Lyme disease have dramatically increased since the disease was named in the 1970s. Much research now focuses on controlling Lyme through ticks (the vector for the disease) or white-tailed deer (the host for ticks). Deer have recently reached surprisingly high numbers in the Northeast, and it is thought that reducing deer populations will effectively decrease tick populations and thus the threat of Lyme. Consequently, many towns have considered implementing ambitious yet controversial deer culling programs. As an alternative, we examine the potential for coyotes to biologically control deer populations through predation.

Coyotes, which prey on deer, have recently migrated into the Northeast from the Plains, perhaps attracted to their new territory by the abundant prey supply. An open question is whether coyotes will come to replace the wolf as a natural control on the deer population, thereby reducing Lyme disease.

We construct a deterministic model to represent the current deer and coyote population dynamics and use this model to investigate the long term interaction of coyotes and deer. We explore the potential for coyotes to either solely act as a biological control on the deer populations or aid deer culling programs. The model is explored analytically and numerically and predicts that significant human intervention is needed to successfully control deer. Thus numerical simulations of the model and possible culling programs are provided to help highlight the system dynamics and guide culling policies.

Introduction

Recently the northeastern region of the United States has suffered an explosion of both White-Tailed Deer and cases of Lyme Disease. These two explosions are not considered to be independent and both issues greatly concern residents and policy makers of the area.

It is thought the deer explosion is a result of increased human population density. As humans developed the area, they created more disturbed landscapes that favor the growth of grasses, a major deer food source. Human development also conflicted with large carnivores, effectively eliminating wolves and other natural ungulate predators from the region. Without predation, deer populations burgeoned in the last century [5]. Populations have grown so large that deer have been reported in some areas at densities as high as ten times what is thought to be healthy (roughly 200 deer per square mile against a healthy 20) [5, 22]. Such dense deer populations are dangerous: they lead to increased deer-car collisions [11, 24], over-grazing that can destroy natural forests, and endangerment of indigenous species [15, 4]. Possibly the greatest danger of a dense deer population, though, is that it increases the risk of contracting Lyme disease [11, 26, 25, 22].
Lyme disease is caused by at least three spirochetal bacteria in the genus Borrelia, most commonly Borrelia burgdorferi. The spirochete is transmitted to individuals by ticks of the genus Ixodes. When a tick bites a host to feed on its blood, it transmits the spirochete through its saliva, infecting the host. Because adult Ixodes ticks frequently feed on the blood of white-tailed deer, it is thought that more deer mean more ticks and thus more Lyme disease. Municipalities have therefore begun to explore protecting their citizens from Lyme disease and the dangers of deer overpopulation with deer control programs [5]. These programs usually propose periodically culling a portion of the deer population, and such programs have many drawbacks.

Culling is expensive and difficult to practice, as annual surveys are needed to count and monitor deer populations. Further difficulties arise because the majority of land is privately owned and not all residents agree morally with killing deer or with bringing the dangers of hunting close to their homes.

We therefore look for another way to control deer without seasonal culling: a biological control already present in the area that could suppress deer in adequate time. Specifically, we investigate whether the emergent eastern coyote could serve as this biological control.

In addition to allowing explosive deer growth, the extinction of wolves in the region has opened a niche for larger carnivores. As the emergence of a resident coyote population has occurred simultaneously, some suggest the coyote, which preys on ungulates, may be filling the wolf niche [14, 2]. Coyotes indigenous to the Great Plains and Southwest United States began migrating east and have successfully established populations in areas as far as New Brunswick and Nova Scotia [17, 2]. The eastern migration of the coyote has happened rather quickly and is thought to have resulted in a population of coyotes that is drastically different from the western coyote. The eastern coyote is physically larger [23, 13, 18, 8, 12] and has a larger home range than its western counterpart [27]. It is unknown why coyotes have moved so quickly to the Northeast, but it is believed that coyotes were attracted by abundant prey [23].

Very little is known about New England’s newest resident carnivore. It is necessary to learn about coyotes in the Northeast, as their new habitats of suburbia and New England deciduous forest differ greatly from their original home on the expansive plains and arid Southwest, and put coyotes in close proximity to humans. However, mammals with large home ranges are incredibly difficult to track and study, so other methods are needed to investigate and understand coyotes on a broad scale in the Northeast [9]. Such studies are especially needed if coyotes present a possible predator for deer and municipalities are looking for a method of deer control. Currently there appears to be no literature on the potential impact of the growing coyote population on Lyme disease, nor any research on exploiting coyotes as a new natural predator and control on deer populations.

We hope that a mathematical model offers a good way to study coyote dynamics and overcome the problems of tracking coyotes. We hope to learn whether coyotes will have a significant impact on bringing deer down to acceptable densities, and whether they can achieve this feat within a reasonable time frame. As humans have a low tolerance for coyotes, we are also interested in whether coyotes can accomplish deer reduction at low coyote densities. Since we find mathematically that coyotes alone will not successfully control deer, we then investigate the types, impacts, and efficiencies of culling programs that towns might pursue, seeking a minimally invasive cull that controls deer while culling the minimum number of animals and minimizing expense.

Modeling

Preliminary Assumptions and Notations
For simplicity, we ignore spatial variation and focus on a one square mile spatially homogeneous closed system. While this prevents modeling the varied distribution of fauna across the landscape, it will allow us to make estimates for larger scales such as multi-state domains. We also use similar assumptions to average predation and growth over a year. There is naturally some annual variability in growth and predation as birth rates and deer vulnerability vary on seasonal conditions such as snow depth [16, 17]. However this assumption is necessary to maintain an autonomous system of equations for analysis; it will also not inhibit our goal of studying the long term population dynamics.

We wish to model the density of deer D(t) and coyote C(t) populations with respect to time t in months. We begin with a classical Lotka-Volterra predator prey system,
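In the notation defined just below, the classical Lotka–Volterra system takes the standard form (a sketch consistent with the parameter definitions; the displayed equations appear only in the PDF of the article):

```latex
\frac{dD}{dt} = rD - aDC, \qquad \frac{dC}{dt} = eaDC - dC
```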

where r is the deer growth rate, a is the proportion of deer that die in coyote-deer interactions, e is the energy that coyotes gain from each killed deer, and d is the death rate for coyotes. This model is too simplistic: in the absence of coyote predation it assumes exponential growth for deer, and in the absence of deer prey, exponential decay for coyotes. We therefore introduce terms that more realistically portray coyote-deer interaction and reliance on the environment.

Predation
As in the classical model, we assume that at reasonable densities, deer die only as a result of coyote-deer interactions. In our sample area, this assumption is nearly reasonable as other natural predators of deer (wolves and large cats) are virtually extinct in the area. To represent this density-dependent predation, we use a Holling type II functional response term [10].

Classical mass-action predation terms such as aDC imply that as prey increase, the number of prey killed by each predator increases. This is accurate if prey populations are relatively small. If prey become dense, however, mass action says that each coyote will kill a proportionally large number of deer. It is more realistic that each coyote has a maximum number of deer it can eat each month, regardless of how gigantic the prey population may be; whether each coyote’s predation is maximized depends on whether there are ample deer.

Holling-type terms allow us to model such a cap on individual coyote predation (see Figure 1). A Holling type III term is frequently used to model mammal predation, as it captures prey switching and lower predation rates when primary prey abundance is low [1]. However, as studies carried out by Patterson (2000) indicate, coyote-deer interaction better resembles a type II response
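The type II response per coyote has the standard saturating form, consistent with the definitions of α and β that follow (a sketch; the displayed equation appears in the PDF):

```latex
P(D) = \frac{\alpha D}{\beta + D}
```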

where α is the maximum number of deer that a single coyote will consume in one month and β controls how quickly the predation approaches α.

Note that one drawback of Holling predation is that it assumes deer must reach an infinite population before individual coyotes maximize their prey consumption. Finally, we modify the Holling term to take the density of predators into account by including C in the coyote-deer interaction term.

Prey Growth
To model deer growth, we choose to use a logistic term:
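In the notation defined below, such a logistic term has the standard form (a sketch; the displayed equation appears in the PDF):

```latex
R_d D \left( 1 - \frac{D}{K_d} \right)
```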

where Rd is the natural growth rate of the deer population and Kd is a carrying capacity for deer. (How to choose the carrying capacity is an extremely difficult issue, important to both coyotes and deer, and is discussed later.) While deer populations have exploded in a relatively short time, it does not seem reasonable to assume that deer reproduce exponentially forever. Furthermore, if we assumed exponential growth and ran the model with initial values as large as current population estimates, D(t) would quickly overrun the model and inhibit any reasonable study. Recent field surveys also show that deer populations seem to be stabilizing, though only at extremely high densities [7].

Predator Growth
We want the coyote population to be correlated with deer and to grow with increased deer, which the Holling term allows. However, if we use the Holling term with the extremely high deer populations observed in the Northeast, the model predicts an exploding coyote population. Given reports of human-coyote encounters and of coyotes killing pets, coyotes are often seen as a threat, and it is reasonable to assume that humans will control them [3, 27]: if the coyote population grows too large, humans will begin killing coyotes. It is also reasonable to assume that the density of coyotes humans will tolerate is below the density that the current prey population could support. To impose this anthropogenic growth barrier, we multiply predation by a logistic growth term with carrying capacity Kc. The importance of Kc will become apparent later.

We also want to account for the fact that coyotes prey on a variety of forest creatures and fruit: they are not solely dependent on the presence of deer [21, 16]. Hence we introduce a term for coyote growth independent of deer:
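A natural form for this term, consistent with the description that follows (a sketch; the displayed equation appears in the PDF), is:

```latex
R_c C \left( 1 - \frac{C}{K_c} \right)
```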

This term allows Rc to represent growth from coyotes preying on species other than deer. Multiplying by the logistic factor (1 − C/Kc) imposes the human barrier on this term as well. Then, in the absence of deer, the coyote population will grow extremely slowly.

Carrying Capacity
The issue of carrying capacity is a delicate issue that arises quite frequently in biological problems. Mathematically, the carrying capacity represents a value which, if the density exceeds this value, growth becomes negative and the population decays back to the carrying capacity. Biologically, carrying capacity is the maximum density of a species that a given habitat can support long-term in ideal conditions.
Mathematically, carrying capacities are typically taken to be constants because they approximate long-term dynamics. Biologically, however, many factors affect carrying capacity, such as food supply, climate, and overcrowding, and it can vary with both external factors, such as human development, and internal factors, such as overabundant inhabitants overgrazing and diminishing their own food source. Carrying capacities are therefore a delicate matter; we nevertheless take Kd and Kc to be constants within our model and draw reasonable values from literature estimates of maximum population levels.

Specialized Coyote-Deer Model
Taking into consideration all of the terms developed above, we end up with the following model:
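Assembling the predation, prey-growth, and predator-growth terms developed above, the model plausibly reads (a reconstruction consistent with the text; the authoritative equations appear in the PDF):

```latex
\begin{aligned}
\frac{dD}{dt} &= R_d D\left(1 - \frac{D}{K_d}\right) - \frac{\alpha D}{\beta + D}\,C,\\
\frac{dC}{dt} &= \left(e\,\frac{\alpha D}{\beta + D}\,C + R_c C\right)\left(1 - \frac{C}{K_c}\right).
\end{aligned}
```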

To aid our analysis, we nondimensionalize the system with the following substitutions:

These substitutions give us a system that behaves in the same manner and has the same equilibria as (1) but has fewer visible constants and simpler computations. The nondimensionalized system is

The reader should keep in mind, when looking at our results, that we work with a dimensionless system; results are dimensionless and should be interpreted as such.

Analysis

Equilibrium
We are interested in the long-term interaction of coyotes and deer, so we look for points of equilibrium. More specifically, we are interested in whether the coyotes can control the deer, i.e., the existence of an equilibrium below the deer carrying capacity at which neither coyotes nor deer are extinct. Equilibria occur at the intersections of the nullclines

There are six intersections and thus equilibria for our system. Given as (x, y) they are

Mathematically, coyotes controlling deer translates to an equilibrium with 0 < x < 1 and 0 < y ≤ 1. As all parameters are positive, we immediately see that the last equilibrium (8), with its negative x value, is biologically meaningless, and we ignore it. It turns out that (6) satisfies our conditions; however, to show this we must analyze the stability of the remaining relevant equilibria.

Stability
To show the stability of equilibria (3)-(7), we evaluate the Jacobian at each equilibrium and examine the signs of its eigenvalues. The 2 × 2 Jacobian matrix is

At (3) and (5) the eigenvalues are positive, and therefore equilibria (3) and (5) (mutual extinction and coyote extinction, respectively) are unstable.
Evaluating the Jacobian at (4), we obtain

The signs of the eigenvalues of the above matrix depend on the parameter values, so this equilibrium is only conditionally stable. As (4) represents coyotes at carrying capacity with deer extinct, we assume this equilibrium is unstable. Thus we force one negative and one positive eigenvalue, obtaining the parameter restriction

It can easily be shown that this restriction forces equilibria (6) and (7) to have positive and negative x-values, respectively. We discard equilibrium (7) as biologically meaningless and concentrate on equilibrium (6), the only equilibrium with positive values for both x and y. We expect this equilibrium to be stable. Evaluating the Jacobian yields an upper triangular matrix

Therefore, J11 and J22 are the eigenvalues and must both be negative for the equilibrium to be stable.

where

Squaring both sides of the above equation, we obtain

where the first inequality uses restriction (9).

The above inequality is equivalent to

The three coefficients of δ in the numerator of J11 can be written as

Hence,

Consider the six terms in the numerator of J11 that are independent of δ. Further, because energy transfer through trophic levels means ecosystems support more herbivores than carnivores, it is reasonable that Kc < Kd. With this assumption, two of the terms in the numerator are

The remaining four terms can be written as

By noting that and combining (11) and (12), we obtain

Therefore, J11 is negative and one of the eigenvalues is negative.

Now we check the sign of the other eigenvalue

Again, as the parameters are positive, the denominator is positive. Due to the negative sign in front, it remains only to show that the sum of the terms in the numerator of J22 is positive. Since there is only one negative term in the numerator, we show that, combined with the other positive terms, the result is positive. Looking at the last three terms in the numerator and using (10) from above, we have

As the last three terms in the numerator of J22 are nonnegative, the entire numerator is positive and J22 is negative.

In conclusion, both eigenvalues J11 and J22 are negative and equilibrium (6) is stable. This means that our coyote-deer system has a stable equilibrium below the maximum number of deer and thus predicts that coyotes will have some controlling effect. Now we numerically examine the extent of this effect.

Numerical Investigation

Setting Parameters
We consider ranges of the parameters gathered from the literature and refine these ranges by choosing values that best model the historical growth of the two populations. The values

gave results that best correlated with the historical dynamics described: deer populations suffered until the 1950s but exploded by the 1990s [22], while coyote observations were sparse in the late 1950s but more common around the 1970s [6, 19]. This can be seen in Figure 2.

Model Results
We start the model with deer and coyotes at their respective carrying capacities, as these are the presumed current levels, and use Maple™ software to numerically solve and plot our system. Figure 3 shows the evolution of the system over the next fifty years. Without predation, deer stay at their carrying capacity; with coyote predation, the deer population drops to the level of the previously found equilibrium (6). This shows that coyotes do and will have an effect on deer populations.

The thin horizontal line in Figure 3 represents 20 deer per square mile, a third of current deer population levels and the natural, healthy density at which deer existed in the presence of the wolf and at which they can persist without overpopulating the area [15, 4]. This is the level recommended by state officials for deer control programs [22, 11]; to begin to control Lyme disease, municipalities would like to see deer even below it [5].

Deer Carrying Capacity Kd
As previously discussed, we take the carrying capacity to be constant. In Figure 3, Kd is set to the current level of deer populations, 60 deer per square mile. However, we can also set Kd to what the literature suggests is a healthy environmental carrying capacity, 20 deer per square mile. At this level, the logistic term causes the deer population to fall naturally to 20 deer per square mile within 300 months, without any human intervention or predation. Even with an artificially high initial condition of ten times the natural carrying capacity (200 deer per square mile) and no coyote predation, deer fall to reasonable densities fairly quickly, as seen in Figure 4. This does not reflect reality: deer populations have remained well above the healthy carrying capacity of 20 deer per square mile and show no signs of steep decline. Clearly, we cannot use the theoretical carrying capacity but must use current population levels and let Kd = 60.

Additional Coyotes
It is evident from Figure 3 that at their current densities of .2 coyotes per square mile, coyotes will not be able to control deer population to the desired levels. If we are correct in assuming that humans control coyote population size to a certain density Kc, then we can consider what would happen if humans would allow more coyotes to inhabit the area. Perhaps, if coyotes were fostered in the area to act as biological control on deer, we would see reduced numbers of deer.

We study this scenario by raising the value of Kc from .2 to .38, roughly doubling the population from current levels in Massachusetts [27]. Figure 5 shows that even doubling the coyote population, while having a larger effect on the deer, will not effectively bring deer to or maintain them at desired levels.

We see that on their own, even at higher densities, coyotes will not control the deer population. Even with larger populations (which are unrealistic given the dense human population already reluctant to share the landscape with the carnivore), coyotes cannot control the deer. Hence, we now consider other efforts that would have to be taken to control deer, and add the effect of culling to our model.

Culling

Analytically, we found that coyotes would indeed control the deer population to some extent: the level at which coyotes can keep the deer is below the observed density of 60 deer per square mile, as illustrated in Figure 4. However, within the range of realistic parameters, our model does not show that coyotes can bring deer to or below the 10 deer per square mile presumed optimal for controlling Lyme disease or reducing the nuisance of deer. Thus, we establish that external forces, such as active culling, will be needed to control deer, and we examine several types of culling.

We begin our study by reconsidering deer growth and modifying our original equation to take into account a loss of deer due to an external active cull. We consider two types of culling: proportional and lump.

Continuous Proportional Culling
We consider the culling strategy that takes a proportion of the population by introducing μ, the proportion of deer killed. Frequently this sort of culling is used for modeling, as it allows for simpler analysis than the more realistic constant value or lump culling. The dimensionless culling model is
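The displayed dimensionless equations appear in the PDF; in the original dimensional variables, proportional culling amounts to subtracting a removal term μD from the deer equation (a sketch, with the dimensionless version following from the same substitutions as before):

```latex
\frac{dD}{dt} = R_d D\left(1 - \frac{D}{K_d}\right) - \frac{\alpha D}{\beta + D}\,C - \mu D
```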

Figure 6 shows the sensitivity of the model to culling: killing only 1.2% of the deer population has a drastic effect in reducing deer. The model also illuminates the importance of coyotes: with the increased pressure of culling, deer become more susceptible to the effect of coyote predation.

However, this result seems unrealistic, as culling 1.2% of the deer is an extremely small cull; much larger culls are projected to be needed to control deer populations. We propose that the flaw of this model is its assumption that culling is continuous: the model continuously removes 1.2% of the deer, which is clearly unrealistic. Actual culling programs tend to be administered seasonally, taking deer during only one period of the year rather than continuously. We would like to take this seasonal culling into account, and thus we need a way to make the effect of culling on our model discrete.

Discrete (Seasonal) Culling
To study a realistic seasonal, proportional culling model, we utilize numerical approximation packages in Maple™. We write a simple program that runs the system for a year, stops and subtracts a proportion μ of the deer population, and then uses that value as the initial condition for the next year. The program outputs a plot of projected yearly deer population values.
We see that with discrete culling, much larger proportions of deer must be killed in order to control populations (Figure 6). This is the more realistic result we hoped to achieve with our Maple™ program. However, one problem remains: time. Looking at the horizontal axis, we see that culling a fixed proportion each year takes too long to bring deer to or below desired levels.
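The yearly cull-and-restart loop just described can be sketched in Python as a hypothetical stand-in for the authors' Maple program. The parameter values below are illustrative assumptions, not the fitted values from the paper, and the integrator is a simple forward-Euler scheme rather than Maple's adaptive solver:

```python
# Sketch of the seasonal-culling simulation described in the text.
# All parameter values are illustrative assumptions, not the paper's.

Rd, Kd = 0.04, 60.0               # deer monthly growth rate, carrying capacity
Rc, Kc = 0.01, 0.2                # coyote growth rate, human-imposed cap
alpha, beta, e = 1.0, 25.0, 0.05  # Holling II predation constants, energy transfer

def derivatives(D, C):
    """Right-hand side of the coyote-deer model sketched in the text."""
    predation = alpha * D / (beta + D) * C          # Holling type II, times C
    dD = Rd * D * (1.0 - D / Kd) - predation
    dC = (e * predation + Rc * C) * (1.0 - C / Kc)  # human-limited coyote growth
    return dD, dC

def simulate_seasonal_cull(mu, years=50, D0=60.0, C0=0.2, steps_per_year=1200):
    """Integrate the ODEs with forward Euler for 12 months at a time;
    at each year boundary remove a fraction mu of the deer.
    Returns the deer density recorded after each annual cull."""
    dt = 12.0 / steps_per_year  # months per Euler step
    D, C = D0, C0
    deer_by_year = [D]
    for _ in range(years):
        for _ in range(steps_per_year):
            dD, dC = derivatives(D, C)
            D += dt * dD
            C += dt * dC
        D *= (1.0 - mu)         # discrete seasonal cull
        deer_by_year.append(D)
    return deer_by_year

deer = simulate_seasonal_cull(mu=0.2)
```

Varying mu in this loop reproduces the kind of comparison the authors make between annual culling rates, without re-deriving the continuous model.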

Conclusion

We looked at the potential of the eastern coyote as a natural biological control on deer populations. Analytically, we found one interesting stable equilibrium at which the coyotes control the deer. Numerically analyzing this equilibrium for reasonable parameters, we find that although coyotes do have some effect on controlling deer populations, it is not enough to keep deer below the desired level (20 deer per square mile). Therefore, we investigated potential anthropogenic control through culling strategies. We found that modeling continuous culling was unrealistic, so we wrote a program to represent seasonal culling. This annual culling model gave far more realistic predictions, yet still showed that it would take too long to control the deer if the proportion culled each year was kept constant. In conclusion, we propose the following plan: cull deer at high rates for a short period, say five years, and then at more modest rates to maintain healthy populations. The predictions for such a strategy can be seen in Figure 7.

Acknowledgements

The authors acknowledge with gratitude support for this work provided by the NSF REU Site Grant at Texas A&M University DMS-0552610 as well as their advisors Dr. Jay R. Walton and Dr. Yuliya Gorb.

Orianna DeMasi is a senior majoring in mathematics at McGill University. Kathy Li is a sophomore majoring in Mathematical Economic Analysis at Brown College.

References

1. Allen, L. ”An Introduction to Mathematical Biology”, 2006.
2. Ballard, W., H. Whitlaw, S. Young, R. Jenkins, and G. Forbes Predation and survival of white-tailed deer fawns in north central New Brunswick, J. of Wildlife Management, Vol. 63, pp 574-579, 1999.
3. Berchielli, L., Impacts of Urban Coyotes on People and Pets in New York State, Proceedings of Wildlife Damage Management Conference, 2007.
4. DeCalesta, D. Effect of white-tailed deer on songbirds within managed forests in Pennsylvania, J. Wildlife Management, Vol. 58, pp 711-718, 1994.
5. Deer Alliance. Retrieved July, 15, 2008. Web site: http://www.deeralliance.ie/
6. Fener, H., J. Ginsberg, E. Sanderson, M. Gompper Chronology of Range Expansion of the Coyote, Canis latrans, in New York, Canadian Field-Naturalist, Vol. 119, No. 1, pp 1-5, 2005.
7. Gregonis, M. 2006/2007 Aerial deer survey indicates stable population, Connecticut Wildlife, Vol. 27, No. 3 pp3, 2007.
8. Gompper, M. The ecology of northeast coyotes: current knowledge and priorities for future research, Wildlife Conservation Society Working Paper, Vol.17, pp 1-47, 2002.
9. Gompper, M., R. Kays, J. Ray, S. Lapoint, and D. Bogan, A Comparison of Noninvasive Techniques to Survey Carnivore Communities in Northeastern North America, Wildlife Society Bulletin, 2006.
10. Holling, C.S. Components of predation as revealed by a study of small-mammal predation of the European Pine Sawfly, Canadian Entomologist, Vol. 91, pp 293-320, 1959.
11. Kilpatrick, H. J., and LaBonte, A. M. ”Managing Urban Deer in Connecticut: a guide for residents and communities”, Hartford: Bureau of Natural Resources, 2007.
12. Kyle, C. J., Johnson, A.R., Patterson, B.R., Wilson, P.J., Shami, K., Grewal, S.K., and White, B.N. Genetic nature of eastern wolves: past, present, and future, Conservation Genetics, Vol. 7, pp 273-287, 2006.
13. Lariviere, S. and Crete, M. The size of eastern coyotes (Canis latrans): a comment, J. of Mammalogy, Vol. 74, pp 1072-1074, 1993.
14. Mathews, N. E., and Porter, W.F. Maternal defense behavior in white-tailed deer, pp 141-160 in A. H. Boer, editor. ”Ecology and management of the Eastern Coyote”, University of New Brunswick, Fredericton, New Brunswick, Canada, 1992.
15. Mc Shea, W.J., and Rappole, J.H., Managing the abundance and diversity of breeding bird populations through manipulation of deer populations, Conservation Biology, Vol. 14, No.4, pp 1161-1170, 2000.
16. Patterson, B.R., Benjamin, L.K., and Messier, F. Prey switching and feeding habits of eastern coyotes in relation to snowshoe hare and white-tailed, Canadian Journal of Zoology, Vol.76, pp 1885-1897, 1998.
17. Patterson, B.R. and Messier, F. Factors influencing killing rates of white-tailed deer by coyotes in eastern Canada, J. Wildlife Management, Vol. 64, No. 3, pp 721-732, 2000.
18. Peterson, R.O., and Thurber, J.M. The size of eastern coyotes ( Canis latrans): a rebuttal, J. of Mammalogy, Vol. 74, pp 1072, 1993.
19. Pringle, L.P. Notes on coyotes in Southern New England, J. Mammal, Vol.41, pp 278, 1960.
20. Rand, P.W., Lubelczyk, C., Holman, M.S., Lacombe, E.H., and Smith Jr., R.P. Abundance of Ixodes scapularis (acari: Ixodidae) after complete removal of deer from an isolated offshore island endemic for Lyme disease, J. Medical Entomology, Vol. 41, pp 779-784, 2004.
21. Samson, C., M. Crete,Summer food habits and population density of coyotes, Canis latrans in boreal forests of southeastern Quebec, Canadian Field-Naturalist, Vol. 111, No. 2, pp 227-233, 1997.
22. Stafford, K.C. ”Tick management handbook: an integrated guide for homeowners, pest control operators, and public health officials for the prevention of tick-associated disease”, The Connecticut Agricultural Experiment Station, New Haven, Connecticut, 2004.
23. Thurber, J. M. and Peterson,R. O. Changes in body size associated with range expansion in the coyote ( Canis latrans), J. of Mammalogy, Vol. 72, pp 750-755, 1991.
24. Walter, W.D., Perkins, P.J., Rutberg, A.T., and Kilpatrick, H.J. Evaluation of immunocontraception in a free-ranging suburban white-tailed deer herd, Wildlife Society Bulletin, Vol. 30, pp 186-192, 2002.
25. Wilson, M.L., Ducey, A.M., Litwin, T.S., Gavin, T.A., and Spielman, A. Microgeographic distribution of immature Ixodes dammini ticks correlated with deer, Medical and Veterinary Entomology, Vol. 4, pp 151-159, 1990.
26. Wilson, M.L., Adler, G.H., and Spielman, A. Correlation between abundance of deer and that of the deer tick, Ixodes dammini (Acari: Ixodi¬dae), Annuals of the Entomological Society of America, Vol. 7, pp 172-176, 1985.
27. Way, J.W., Ortega, I.M., and Auger, P.J. Eastern coyote home range, territoriality, and sociality on urbanized Cape Code, Northeast Wildlife, Vol. 57, 2002.

No Bull: Science, Manufacture, and Marketing of Red Bull and Other Energy Drinks

by: Zeno Yeates, ’10

(You may also see the full spread of this article in a PDF.)

The increasing prevalence of energy drinks over the past decade is a phenomenon that cannot simply be dismissed as a passing obsession. What began with the advent of Red Bull in 1984 has evolved into a colossus of brands claiming everything from sharpened mental acuity to enhanced athletic performance. Austrian-born Red Bull founder and CEO Dietrich Mateschitz relies on the younger generation for his sales base, exploiting the teenage appetite for risk-taking and adventure with dramatic product names, menacing logos, and sponsorship of extreme sporting events [1]. Predictably, a multitude of competitors have followed suit, introducing similar concoctions with dicey names such as Cocaine, Dare Devil, Pimp Juice, Venom, and Monster. However, none of the claims of enhanced performance has been verified by the U.S. Food and Drug Administration, a gap that has drawn substantial criticism from the media and a variety of third-party organizations [2]. Yet at this point, neither advocate nor critic speaks with certainty.

All things considered, the chronicle of Red Bull is one of tenacious entrepreneurship more than of science. Still, the rationale behind its chemical formula reveals the potentially medicinal properties of its components. While overseas as a traveling toothpaste salesman, Mateschitz discovered the revitalizing effect of a syrupy tonic sold at local pharmacies in Thailand. The mixture, composed only of water, sugar, caffeine, taurine, and the carbohydrate glucuronolactone, soon became a mainstay remedy for his chronic jet lag. After reading a financial article listing the top ten taxpayers in Japan, he was surprised to find that a certain Mr. Taisho, who manufactured a similar restorative beverage, was listed among the other entrepreneurs. The ingredients were listed explicitly on the can itself, and neither trademark nor patent protected the formula; hence, Red Bull was born [3].

Careful observation of any university library will reveal the undeniable popularity of iPods and Red Bulls – the arsenal of the true titan of academic endeavor confronting a full night of intellectual tribulation. Nevertheless, some wonder whether Red Bull’s buzz serves only to distract the active mind, much as prolonged auditory stimulation seems to. The most immediate answer is given on the container itself, which specifically claims to improve performance in times of elevated stress or strain, increase endurance, increase reaction speed, and stimulate metabolism [4]. Surprisingly, the only active ingredients aside from water, sugar, and caffeine are taurine and the carbohydrate glucuronolactone [3]. A series of earlier Japanese studies on taurine had suggested cardiovascular benefits, further convincing Mateschitz of its revitalizing properties. Taurine, however, is not as unfamiliar a biochemical as some might imagine. From a human physiological standpoint, taurine (2-aminoethanesulfonic acid, NH2-CH2-CH2-SO3H) is a major constituent of bile found in the lower intestine [5]. Produced in the liver and brain, taurine plays an important role in the regulation of osmolarity, muscle contraction, and neuroprotection [6]. While technically not an amino acid (it conspicuously lacks a carboxyl group), taurine is a derivative of the sulfhydryl amino acid cysteine, and it is the only known naturally occurring sulfonic acid [7]. Human taurine synthesis occurs in the liver, although taurine is also naturally produced in the testicles of numerous mammals, and urban legends hold that commercial taurine is derived from bull urine and semen. Although taurine is found in both of these sources, the pharmaceutical industry actually obtains it from isethionic acid, which in turn is produced by the reaction of ethylene oxide with aqueous sodium bisulfite. In 1993 alone, approximately 5,000-6,000 tons of taurine were produced [8].

However, it must be noted that taurine is not the only active ingredient among the host of energy drinks on the market today. A given energy drink will generally include an amalgamation of caffeine, B vitamins, and herbal extracts. Other common ingredients include guarana, ginseng, L-carnitine, glucuronolactone, and ginkgo biloba. Many contain high levels of sugar, but many brands also offer artificially-sweetened “low-carb” versions. Nevertheless, the primary ingredient in nearly all energy drinks is caffeine, of which the average 16-fluid-ounce serving contains 150 mg [2]. Little is known about the health effects of taurine and glucuronolactone beyond the fact that the quantities in stimulant drinks are several times higher than those supplied by a normal diet [9].

Nevertheless, taurine plays a major regulatory role within the human body. Found in high concentrations in skeletal muscles, it functions in regulating myofibril contraction. It increases force generation by enhancing the accumulation and release of calcium ions within the sarcoplasmic reticulum. Increasing intracellular taurine levels also augments the mean rate of increase in the force response. It has been suggested that the balance of endogenous myofibril taurine concentrations is critical for maintaining the appropriate force output during muscle contraction. Muscle fibers possibly modulate their contractility by increasing the taurine levels in response to neuronal inputs. Considering taurine’s role in muscle contraction, it may appear that increasing blood taurine concentration through dietary intake could enhance contractile force. However, considering that intracellular taurine concentration is tightly regulated [10], it remains unknown whether an increase in taurine plasma levels following consumption would have any noteworthy effect.

Since taurine exerts neuroprotective activity against excitotoxicity and oxidative stress, it is no surprise that massive amounts are released during an ischemic episode. An in vivo analysis indicated that during ischemia, a seventeen-fold increase in taurine levels is typically observed in the brain. There are, however, two sources of taurine in the brain: direct synthesis by neurons and transport across the blood-brain barrier (BBB). The BBB is the specialized lining of the blood vessels supplying the brain, and it regulates the exchange of molecules between blood and brain tissue. It is highly permeable to non-polar compounds but far less so to polar ones; this selectivity keeps harmful substances out of the brain while admitting those necessary for normal function. While caffeine can readily diffuse across the BBB, the entry of taurine appears to be much more tightly regulated [11].

Taurine is present in high concentrations throughout the brain, and it has been hypothesized that ingesting taurine in conjunction with caffeine improves concentration and reaction speed while also enhancing emotional status. Seidl et al. performed a double-blinded, placebo-controlled study in which the experimental group ingested a capsule containing caffeine, taurine, and glucuronolactone while the control group received a placebo. The authors reported that members of the experimental group had shorter motor reaction times and better overall psychological well-being when evaluated. Hence, they concluded that taurine in conjunction with caffeine and glucuronolactone had positive effects on cerebral function, and they conjectured that taurine might interact with GABAergic, glycinergic, cholinergic, and adrenergic neurotransmitter systems. Nevertheless, because the study included no caffeine-only arm, the authors conceded that the cognitive findings may have been attributable solely to caffeine [12]. Caffeine alone could plausibly produce such results: it is widely known to competitively inhibit adenosine receptors and thereby increase cAMP concentration [13]. This blockade can free cholinergic neurons from inhibitory control, leading to pervasive excitatory responses and the suppression of fatigue. These properties of caffeine alone may explain the favorable cognitive and emotive effects demonstrated in the experiments [14].

A further caveat to the study by Seidl et al. stems from the fact that sodium- and chloride-dependent taurine transporters exist in the BBB. The activity of these transporters is closely regulated through transcription of the genes encoding them [16]. Such transcription appears to depend on the degree of cell damage, osmolality, and the level of taurine in the brain, suggesting that expression of these genes serves as an acute response to neuronal perturbation or crisis. Hence, under normal, non-ischemic conditions, taurine levels within the brain are maintained at a stable level [15]. An increase in plasma taurine from dietary supplementation is therefore unlikely to cause a sudden influx of taurine into the brain. Furthermore, given the substantial amount of endogenous taurine already present in the brain, it is questionable whether any entry would make a significant difference to the overall concentration [16].

Taurine itself is naturally present in a variety of meats, seafood, and milk [17]. However, the taurine supplied by energy drinks is several times greater than that of a normal diet [5]. Under normal physiological circumstances, taurine is highly conserved in the adult human body and present in relatively large quantities [9]. A 70 kg (155 lb) human has been estimated to contain about 70 g of taurine, while mean daily intake has been estimated at somewhere between 40 and 400 mg [5]. In contrast, many energy drinks may contain up to 4,000 mg of taurine. Nevertheless, there is almost no data to suggest that consumption of taurine alone poses any substantial risk to human health. However, a study by Michele Simon and James Mosher examined the growing trend of mixing energy drinks with alcohol. It determined that blending energy drinks with alcohol greatly increased the number of energy drinks consumed per session, particularly among males aged 19-24 years. In addition, the data suggest that taurine may somewhat ameliorate the unfavorable effects of alcohol consumption [18]. Conversely, alcohol is known to exert an inhibitory effect on taurine homeostasis in humans. The implication is that the massive influx of taurine from energy drinks encourages binge drinking; the advent of the Jägerbomb aptly reflects the popular understanding of the two compounds’ antagonistic effects. To this effect, drinkers attest to more reckless behavior and to a greater overall capacity for consumption. A study conducted in early 2006 concluded that combining energy drinks with alcohol predisposes drinkers to alcohol abuse, since the depressant effects of the alcohol are somewhat masked by the stimulant effects of the energy drink [19]. Additional concern exists for the havoc that the depressant-stimulant combination wreaks on the heart. Alcohol alone, if abused, has been shown to reduce brain activity, impair cardiac function, and potentially lead to myocardial infarction [20]. In combination with an energy drink, effects on the consumer may include shortness of breath and an irregular heartbeat. Moreover, the body’s defenses are weakened by dehydration, since both alcohol and caffeine are diuretics [21].
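The figures above invite a quick back-of-the-envelope comparison. A minimal sketch in Python, using only the values quoted in the text, of how one energy drink’s taurine load compares with normal daily intake and with the body’s existing pool:

```python
# Taurine figures quoted above, all in milligrams.
body_taurine_mg = 70_000         # ~70 g in a 70 kg (155 lb) adult
daily_intake_mg = (40, 400)      # estimated normal dietary range per day
drink_taurine_mg = 4_000         # upper-end content of one energy drink

# One drink relative to the normal dietary range...
vs_high_intake = drink_taurine_mg / daily_intake_mg[1]   # multiples of a high-intake day
vs_low_intake = drink_taurine_mg / daily_intake_mg[0]    # multiples of a low-intake day
# ...and relative to the body's existing pool.
pct_of_pool = 100 * drink_taurine_mg / body_taurine_mg

print(f"One drink: {vs_high_intake:.0f}x-{vs_low_intake:.0f}x normal daily intake, "
      f"but only {pct_of_pool:.1f}% of total body taurine")
```

A single can thus delivers an order of magnitude more taurine than a typical day’s diet, yet barely perturbs the roughly 70 g already on hand, consistent with the argument that dietary taurine is unlikely to shift brain concentrations.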

Yet despite the injurious social trends that have become associated with energy drinks, many studies have demonstrated the efficacy of the products in their pure form. With regard to the psychological effects of Red Bull Energy Drink, two studies reported significant improvements in cognitive performance in addition to increased mental alertness [22]. Moreover, consumption of energy drinks may induce a mild to moderate euphoria, primarily caused by the stimulant properties of caffeine and ginseng [16]. The restorative properties were attributed to a combination of caffeine and sugar in energy drinks, though a concerted effect between glucose and caffeine has also been suggested. Concerning generalized physiological effects, the consumption of Red Bull alone was shown to promote endurance during repeated cycling tests in young healthy adults [23].

The short-term physiological effects of energy drinks were most thoroughly examined by Alford, Cox, and Wescott in a series of three studies conducted on a small population of students from the University of Bristol in England. The studies investigated psychomotor performance (reaction time, concentration, and memory), subjective alertness, and physical endurance. When compared with control drinks containing neither taurine, caffeine, nor glucuronolactone, Red Bull significantly improved aerobic endurance in addition to anaerobic performance in stationary cycling tests. Significant improvements in mental performance were also noted, especially with respect to choice reaction time, concentration, and memory. These consistent improvements in both mental and physical performance were interpreted as reflecting the combined effects of the active ingredients [22]. The study was motivated by the fact that Red Bull contains several active components (taurine, glucuronolactone, caffeine, and several B vitamins) that exert a multiplicity of effects on human metabolism. Perhaps more fundamentally, Red Bull also contains glucose, which is metabolized to release energy during both aerobic and anaerobic metabolism and may improve cognitive performance [14]. The host of B-group vitamins in Red Bull must not be overlooked either: it is widely known that vitamin B-12 (cyanocobalamin) plays a critical role in human brain function and is intimately associated with energy production [24]. Nevertheless, the most important conclusion drawn from this study is that the anti-hypertensive actions of taurine may oppose increases in blood pressure from caffeine, suggesting that the ingredients act in concert, not independently, to achieve the observed effects [14]. Although most improvements in mental acuity are attributed to the caffeine content, it should be noted that energy drinks contain several other biologically-active ingredients that may contribute to this effect [15].

Perhaps surprisingly, the biological effects and health consequences of caffeine, despite extensive research, remain the subject of ongoing debate. In the UK, the mean daily caffeine intake from tea, coffee, and carbonated beverages is estimated to be 278 mg/day for a typical 70 kg male [25]. Beyond beverages, caffeine is present in many medications, headache treatments, and diet pills; in fact, it is an active ingredient in more than 70% of the soft drinks consumed in the United States [12]. However, one must assess not only the quantity ingested but also the rate at which it is metabolized. All of the caffeine contained in a cup of coffee (115-175 mg) is cleared from the stomach within forty-five minutes of ingestion [26]. The caffeine is then absorbed from the small intestine but does not accumulate in the body, as it is rapidly metabolized by the liver, with a half-life of 4 hours in a normal adult [27]. Longer half-lives are found in women using oral contraceptives (5-10 hrs) and in pregnant women (9-11 hrs) [28]. One review of caffeine dependence studies documents a wide variety of withdrawal symptoms, including headache, irritability, drowsiness, mental confusion, insomnia, tremors, nausea, anxiety, restlessness, and increased blood pressure [29]. Interestingly, the symptoms of caffeine withdrawal also occur in cases of excess consumption [30]. On the other hand, lower doses of caffeine (20-200 mg per day) have been associated with positive effects on mood, such as perceived feelings of increased energy, imagination, efficiency, self-confidence, alertness, motivation, and concentration. While caffeine is reported to reduce reaction time during simple tasks, the effect is thought to come from enhanced coordination rather than accelerated mental activity [16].
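The clearance figures above imply simple first-order elimination, under which the fraction of a dose remaining after t hours is 0.5^(t / t_half). A minimal sketch of that arithmetic (the single-compartment model is an idealization, and the 150 mg dose is the per-serving figure cited earlier in the article):

```python
# First-order elimination: fraction of a caffeine dose remaining after
# t hours, given a half-life in hours. Real clearance varies with
# pregnancy and oral contraceptive use, as noted in the text.
def fraction_remaining(t_hours, half_life_h=4.0):
    return 0.5 ** (t_hours / half_life_h)

dose_mg = 150.0  # one 16-fluid-ounce energy drink serving
for t in (0, 4, 8, 12):
    print(f"{t:2d} h: {dose_mg * fraction_remaining(t):6.1f} mg remaining")

# The longer half-lives cited above stretch the curve: at 8 hours,
# someone with a ~10 h half-life retains far more of the dose.
print(f"8 h at a 10 h half-life: {dose_mg * fraction_remaining(8, 10.0):.0f} mg")
```

With a 4-hour half-life, three-quarters of a serving is gone within 8 hours, which is why caffeine is said not to accumulate under normal consumption patterns.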

In contrast to these more detrimental patterns of consumption, the newest line of energy drinks is marketed with a health component, suggesting benefits for individuals engaged in athletic pursuits. The progeny of this innovation is Free Radical Scavenger Energy (a.k.a. FRS Energy), and Lance Armstrong is already on board for endorsements. The legendary cyclist makes the supportive claim: “With all I have going on, I need a source of sustained energy. FRS fits in line with me wanting to be ninety, wanting to keep running marathons, riding my bike, being fit, and having fun.” Apparently, the marketing angle on energy drinks has progressed from late-night fix to sheer invincibility. The basic premise of FRS Energy is that it simultaneously fights fatigue and cancer. FRS differs from its competitors in its central component, quercetin, which is claimed to play a dual role in the human body as both a stimulant and a flavonoid [31]. Flavonoids are typically secondary plant metabolites known to sequester numerous mutagens and carcinogens [32]. FRS further claims that this plant derivative combats fatigue by inhibiting the enzyme catechol-O-methyltransferase (COMT), which is responsible for the degradation of catecholamine neurotransmitters including dopamine, epinephrine, and norepinephrine. These neurotransmitters are largely responsible for the “fight or flight” response generated in times of duress [33]. Contrary to other energy drinks, FRS claims not to have the commonly associated withdrawal and crash effects [32]. This claim rests on the fact that quercetin has a physiological half-life of 16 hours, compared to the 4-hour half-life of caffeine, thereby extending its window of physiological effect on the body. Quercetin also helps to fight cellular damage and fatigue caused by the oxidants that accumulate from daily activity, exercise, and stress [34].

The mass manufacture of quercetin might present a temporary loophole in high-powered athletic drug screening. Caffeine use in Olympic sports is currently a contentious issue; the International Olympic Committee (IOC) considers a urinary caffeine concentration of 12 mg/L a positive drug test, yet caffeine has been shown to enhance performance at doses that produce urinary concentrations well below that threshold [35]. Caffeine is reported to be most beneficial in endurance-type exercise [36]. Nevertheless, there is still no consensus on the biological mechanism behind these performance improvements. One possibility is that caffeine lowers the threshold for the release of exercise-induced endorphins and cortisol, hormones which may contribute to the reported benefits of caffeine on exercise endurance. There is also no widely-accepted evidence that such performance-enhancing effects increase with additional caffeine doses [26]. On the contrary, the dehydrating effects of caffeine and the absorption-inhibiting effects of glucose pose a serious threat to an athlete training in warm weather [14].

The principal concerns regarding energy drinks stem from two risks associated with ingesting anything of high caffeine content. The first is caffeine overdose, which can result in tremors, seizures, or even death [37]. The LD50 for caffeine depends on body mass, but the mean lethal dose for a 70 kg (155 lb) male ranges from 13-19 g, which equates to approximately 80 cups of coffee [38]. Several deaths have been linked to the excessive caffeine content of energy drinks, a consequence of the misleading display of nutrition facts on the container. Manufacturers often limit the stated serving size to half or even a third of the bottle and do not factor in the hidden caffeine content of herbal additives such as guarana. Such commercial subterfuge allows rapid ingestion of large quantities of caffeine under the assumption that one is consuming nothing more than the equivalent of two cups of coffee [21]. The second risk is dehydration, which also results from excessive caffeine intake [39]. Claims that energy drinks enhance athletic performance have led to consumption both before and after athletic activity, and coupling fluid loss from exercise with the diuretic properties of caffeine accelerates dehydration [26]. Reflecting on the recent proliferation of the energy drink industry, the Stimulant Drinks Committee of the British Nutrition Committee issued a series of recommendations for consumers. Primarily, the committee advises discretion in consuming stimulant drinks with alcohol. Likewise, it deems energy drinks unsuitable for children under the age of sixteen, for pregnant women, or for individuals sensitive to caffeine. Finally, it recommends that stimulant drinks not be consumed as thirst quenchers in association with sports and exercise [36]. Even so, the true test of the efficacy and safety of energy drinks will only come from decades of widespread consumption by the general populace.
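The “80 cups” equivalence can be sanity-checked against the ranges quoted in the article (an LD50 of 13-19 g, 115-175 mg of caffeine per cup, and 150 mg per energy drink serving); a rough sketch, assuming those figures:

```python
# Ranges quoted in the text.
ld50_g = (13.0, 19.0)               # mean lethal dose range, 70 kg adult
cup_mg = (115.0, 175.0)             # caffeine per cup of coffee
serving_mg = 150.0                  # per 16-fluid-ounce energy drink serving

# Cups of coffee spanning the lethal range:
cups_low = ld50_g[0] * 1000 / cup_mg[1]    # strong coffee, low-end LD50
cups_high = ld50_g[1] * 1000 / cup_mg[0]   # weak coffee, high-end LD50
servings = ld50_g[0] * 1000 / serving_mg   # labeled energy-drink servings

print(f"Lethal range: ~{cups_low:.0f}-{cups_high:.0f} cups of coffee, "
      f"or ~{servings:.0f}+ energy-drink servings")
```

The danger the article describes is therefore not any single can, but labeling that lets heavy consumption, plus the hidden caffeine in guarana, go unnoticed.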

Nevertheless, one must consider the societal ramifications of widespread consumption. Assuming that energy drinks do confer some sort of competitive advantage, whether in the athletic or the intellectual realm, what are the consequences? This same question was posed by David Eagleman, assistant professor of neurobiology and anatomy at Baylor College of Medicine, during the Rice Scientia lecture last fall. He stressed that one factor to consider is the relatively high cost of energy drinks, as most products sell for about $3 a can. Eagleman also made the very apt remark that many gateways to economic success are based on standardized tests mandating a specified degree of mental capability [40]. He went on to suggest that socio-economic disparities might be exacerbated when the affluent have unrestricted access to such mind-enhancing products. Competition for high-stakes tests such as the SAT is enormous, and reports abound of identity fraud and violations of the test protocols meant to ensure the standardized nature of these monolithic rites of passage into the professional world [41]. Operating under the assumption that energy drinks do confer some real performance advantage, one can only speculate on how the retail of such a competitive edge will influence existing class disparities. In the same light, any student has heard of someone taking self-prescribed Adderall to jack their focus for the next big exam. When school becomes a sport, what will be the next regulatory countermeasure: retinal scans and blood testing before the MCAT? Nevertheless, despite the debatable efficacy of existing energy drinks, their development marks the realization of a concept that will likely be pursued further: biological performance enhancement not for sport but for academia, manufactured, marketed, and sold in a diversity of ways.

In sum, what began with the introduction of Red Bull twenty-five years ago has proliferated into an assortment of products that demonstrate short-term physiological benefits. However, the effects of regular long-term consumption have yet to be determined. Although energy drinks have traditionally associated themselves with risk-taking and subversive behaviors, the consolidation of various stimulant and focus-enhancing substances into one beverage represents an effort to maximize productivity. Not surprisingly, new product lines supported by world-class athletes claiming added health benefits constitute the newest progression for the energy drink enterprise. Finally, one must question the socio-economic ramifications of widespread use of legalized performance enhancers. Nevertheless, for now at least, the buying and selling of human energy remains caveat emptor, though there is no better remedy for the infamous all-nighter.

Zeno Yeates is a junior double-majoring in Biochemistry & Cell Biology and English at Sid Richardson College.

References

1. Noonan, David. “Red Bull’s Good Buzz.” Newsweek. 14 May 2001: 63.
2. Heidemann, M., Urquhart, G., and Briggs, L. A Can of Bull? 20 June 2005. Michigan State University, Division of Science and Mathematics Education. 8 Feb 2009.
3. Gschwandtner, Gerhard. “The Powerful Sales Strategy Behind Red Bull.” Selling Power. Sept. 2004: 60-70.
4. Red Bull International. May & June 2007. 27 Mar. 2009 .
5. Bouckenooghe T, Remacle C, Reusens B (2006). “Is taurine a functional nutrient?”. Curr. Opin. Clin. Nutr. 9 (6): 728–733.
6. Chepkova AN, Doreulee N, Yanovsky Y, Mukhopadhyay D, Haas HL, Sergeeva OA. Long-lasting enhancement of corticostriatal neurotransmission by taurine. Eur. J. Neurosci. 2002 Oct;16(8):1523-30.
7. Brosnan J, Brosnan M (2006). “The sulfur-containing amino acids: an overview.”. J Nutr 136 (6 Suppl): 1636S–1640S.
8. Kurt Kosswig. Sulfonic Acids, Aliphatic. in Ullmann’s Encyclopedia of Industrial Chemistry. Wiley-VCH, 2000
9. Urquhart N, Perry TL, Hansen S, Kennedy J. Passage of taurine into adult mammalian brain. Journal of Neurochemistry. 1974 May;22(5):871-2.
10. Bakker AJ, Berg HM. Effect of taurine on sarcoplasmic reticulum function and force in skinned fast-twitch skeletal muscle fibres of the rat. J. Physiol. 2002 Jan;538:185-94.
11. Kang YS, Ohtsuki S, Takanaga H, Tomi M, Hosoya K, Terasaki T. Regulation of taurine transport at the blood-brain barrier by tumor necrosis factor-alpha, taurine and hypertonicity. J Neurochem. 2002 Dec;83(5):1188-95.
12. Seidl R, Peyrl A, Nicham R, Hauser E. A taurine and caffeine-containing drink stimulates cognitive performance and well-being. Amino Acids. 2000;19(3-4):635-42.
13. Fisone, G; Borgkvist A, Usiello A (April 2004). “Caffeine as a psychomotor stimulant: mechanism of action”. Cell Mol Life Sci 61 (7–8): 857–72.
14. Warburton DM, Bersellini E, Sweeney E. An evaluation of a caffeinated taurine drink on mood, memory and information processing in healthy volunteers without caffeine abstinence. Psychopharmacology. (Berl). 2001 Nov;158(3):322-8.
15. Benrabh H, Bourre JM, Lefauconnier JM. Taurine transport at the blood-brain barrier: an in vivo brain perfusion study. Brain Res. 1995 Sep 18;692(1-2):57-65.
16. Machado-Vieira R, Viale CI, Kapczinski F. Mania associated with energy drinks: the possible role of caffeine, taurine, and inositol. Can. J. Psychiatry. 2001 Jun;46(5):454.
17. Kohlmeier, Martin. Nutrient metabolism. London: Academic P, (424) 2003.
18. Simon, Michele, and James Mosher. Alcohol, Energy Drinks, and Youth: A Dangerous Mix. Marin Institute. 2007.
19. Combining Alcohol and Popular Energy Drink Reduces the ‘Perception’ of Impairment. Science Daily, 30 March 2006.
20. Jager Bombs stir explosive consequences. The Daily Cardinal, 17 April 2008.
21. Rose, Rachel. “The Dangers of “Energy” Drinks.” ABC 33/40. 15 July 2008. American Broadcasting Company. 14 Feb. 2009 .
22. Alford C, Cox H, Wescott R. The effects of Red Bull energy drink on human performance and mood. Amino Acids. 2001;21(2):139-50.
23. Horne J.A, Reyner L.A. Beneficial effects of an “energy drink” given to sleepy drivers. Amino Acids. 2001;20(1):83-9.
24. Voet, Donald, and Judith Voet (1995). Biochemistry (2nd ed.). John Wiley & Sons Ltd. pp. 675.
25. Laurent D, Schneider KE, Prusaczyk WK, Franklin C, Vogel SM, Krssak M, Petersen KF, Goforth HW, Shulman GI. Effects of caffeine on muscle glycogen utilization and the neuroendocrine axis during exercise. J. Clin. Endocrinol. Metab. 2000 Jun;85(6):2170-5.
26. Bolton, Ph.D., Sanford; Gary Null, M.S. (1981). “Caffeine: Psychological Effects, Use and Abuse”. Orthomolecular Psychiatry 10 (3): 202–211.
27. Baum M, Weiss M. The influence of a taurine containing drink on cardiac parameters before and after exercise measured by echocardiography. Amino Acids. 2001;20(1):75-
28. Ortweiler, W; Simon HU, Splinter FK, Peiker G, Siegert C, Traeger A. (1985). “Determination of caffeine and metamizole elimination in pregnancy and after delivery as an in vivo method for characterization of various cytochrome p-450 dependent biotransformation reactions”. Biomed Biochim Acta. 44 (7–8): 1189–99.
29. Juliano, L M (2004-09-21). “A critical review of caffeine withdrawal: empirical validation of symptoms and signs, incidence, severity, and associated features”. Psychopharmacology 176 (1): 1–29.
30. “Caffeine-related disorders”. Encyclopedia of Mental Disorders. Retrieved on 2006-08-14.
31. FRS Healthy Energy. April & May 2008. 27 Mar. 2009.
32. “Studies force new view on biology of flavonoids”, by David Stauth, EurekAlert!. Adapted from a news release issued by Oregon State University.
33. “Catecholamine.” Dorland’s Medical Dictionary.
34. Murakami A, Ashida H, Terao J. Multitargeted cancer prevention by quercetin. Cancer Lett. 2008 Oct 8;269(2):315-25.
35. Finnegan, Derek. “The Health Effects of Stimulant Drinks.” Nutrition Bulletin. 28.2 (2003): 147-55.
36. Graham, TE; Spriet, LL (December 1991). “Performance and metabolic responses to a high caffeine dose during prolonged exercise”. J Appl Physiol 71 (6): 2292–8.
37. Leson CL, McGuigan MA, Bryson SM (1988). “Caffeine overdose in an adolescent male”. J. Toxicol. Clin. Toxicol. 26 (5-6): 407–15.
38. Peters, Josef M. (1967). “Factors Affecting Caffeine Toxicity: A Review of the Literature”. The Journal of Clinical Pharmacology and the Journal of New Drugs (7): 131–141.
39. Maughan, RJ; Griffin J (2003). “Caffeine ingestion and fluid balance: a review.”. J Human Nutrition Dietetics 16: 411–20.
40. Eagleman, David. “Rice University Webcasts: The Brain and the Law.” Rice University: Live Webcasts & Archives. 28 Mar. 2009
41. Rivera, Carla. “SAT, ACT cheats face no penalty.” Los Angeles Times [Los Angeles] 14 July 2008, B.1 sec.

Characterization of a Recently-Discovered Mutant Fetal Hemoglobin

by: Arindam Sarkar and Dr. John S. Olson

For the full article complete with figures, please see the pdf of this article from the magazine.

Abstract

Last summer, Dr. Mitchell Weiss and his colleagues at Children’s Hospital of Philadelphia discovered a new hemoglobinopathy in a baby from Toms River, New Jersey, who was born cyanotic and with enlarged spleen and liver tissues. Sequencing of the baby’s hemoglobin alleles revealed a missense mutation in a segment of DNA that codes for the gamma chains of fetal hemoglobin (HbF), the oxygen-carrying protein in the red blood cells of human fetuses. The objective of our work is to use recombinant DNA technology to construct the Hb Toms River mutation, γ Valine 67 (E11) Methionine, in plasmid DNA, which can then be used to express and purify the mutant protein in E. coli. We plan to characterize the mutant HbF in order to understand its clinical manifestations and, perhaps, to develop treatment options. This paper provides an overview of HbF developmental biology, our initial hypothesis of how the Hb Toms River mutation might lead to cyanosis, and our strategy for expressing and characterizing the γ Val67 to Met mutation in recombinant HbF.

Introduction

During a consultation with pediatricians at Children’s Hospital of Philadelphia, Dr. Mitchell Weiss discovered a new blood disorder in a child who was born cyanotic and with an enlarged spleen and liver. These symptoms resolved roughly two months after her birth, and she was normal and healthy by six months. Based on their initial clinical observations, Dr. Weiss and the treating physicians suspected that a mutant fetal hemoglobin might be the cause of the baby’s symptoms, so they drew small blood samples from the baby for analysis several days after birth. They discovered that her condition falls into a class of hematological disorders known as hemoglobinopathies: genetic defects in the DNA sequences that produce hemoglobin. Hemoglobin is the primary oxygen transport protein in humans, and analysis of the baby’s DNA revealed a Val67 to Met (V67M) mutation in one of the child’s γ chain alleles. This mutation occurs in a region of DNA that gives rise to the eleventh amino acid along the E helix of the γ globin chain, which is designated Val(E11) for its spatial location in the three-dimensional structure of hemoglobin subunits (Figure 1).

The original mutant fetal protein could not be studied directly because of physical and clinical limitations that prevent the withdrawal of significant amounts of blood from an anemic infant. Another problem was that fetal hemoglobin production switches to adult hemoglobin production shortly after birth as part of normal development. Thus, the cyanotic condition resolved 6 to 8 weeks after birth, when the γ gene (characteristic of HbF) was silenced and only normal adult hemoglobin remained in the baby’s red blood cells. To obtain enough starting material for study, we chose to produce mutant HbF in our laboratory using recombinant technology. The objective of our work is to use structural biology to characterize the γ V67M mutation in HbF, examine the role of the E11 position in O2 binding in γ chains, and then understand why the mutation caused cyanosis and spleen enlargement.

Hemoglobin Development

Hemoglobin is a complex iron-containing protein in the blood that picks up oxygen from the lungs and carries it to respiring cells; at the same time, it assists in transporting carbon dioxide away from the peripheral tissues. Mammalian red cell hemoglobins are tetramers consisting of four polypeptide chains and four planar prosthetic groups known as heme molecules [2, 3, 9]. Each red blood cell contains about 280 million hemoglobin molecules [7].

Different kinds of hemoglobins are commonly identified by the specific combination of polypeptide chains, or subunits, within each tetramer. During development and after birth, three main types of hemoglobin are expressed (Figure 2). The first type is embryonic hemoglobin, which consists of two ζ and two ε globin chains. The low-oxygen environment of the uterine wall demands a higher oxygen affinity than either HbF or adult hemoglobin confers, and embryonic hemoglobin functions well under these conditions. After 10 to 12 weeks of development, the primary form of hemoglobin switches from embryonic hemoglobin to fetal hemoglobin, HbF (α2γ2). At this point, the fetus’s red blood cells have access to the oxygen passing through the placenta and umbilical cord. Like embryonic hemoglobin, HbF has a higher oxygen affinity than adult hemoglobin, allowing the fetus to extract oxygen from the mother’s blood in the placenta.

When a baby is born and begins to breathe air, γ chain production ceases and β chains are produced instead, resulting in adult hemoglobin, HbA (α2β2). At birth, HbF comprises 50-95% of the child’s hemoglobin; these levels decline to nearly zero over the first six months as adult hemoglobin synthesis becomes fully activated. The hemoglobin genes reside on chromosomes 11 and 16 (Figure 3).
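
The functional consequence of the HbF-to-HbA difference in oxygen affinity can be illustrated with the empirical Hill equation for fractional saturation. The P50 and Hill-coefficient values below are typical textbook figures chosen for illustration, not measurements from this study:

```python
# Fractional O2 saturation from the Hill equation:
#   Y = pO2^n / (P50^n + pO2^n)
# where P50 is the O2 tension at half-saturation and n is the
# Hill coefficient (cooperativity).
def saturation(po2_mmHg, p50_mmHg, n=2.8):
    return po2_mmHg ** n / (p50_mmHg ** n + po2_mmHg ** n)

# Typical textbook values (assumed for illustration):
P50_HBF = 19.0  # mmHg, fetal hemoglobin (higher affinity = lower P50)
P50_HBA = 26.0  # mmHg, adult hemoglobin

# At a placental O2 tension of roughly 30 mmHg, HbF reaches a
# noticeably higher saturation than HbA, which is what lets the
# fetus draw oxygen from maternal blood:
for label, p50 in [("HbF", P50_HBF), ("HbA", P50_HBA)]:
    print(f"{label}: {saturation(30.0, p50):.2f}")
```

The lower P50 of HbF shifts its saturation curve to the left, so at any O2 tension found in the placenta, HbF binds a larger fraction of oxygen than HbA does.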

Genetic abnormalities can suppress the switch to adult hemoglobin synthesis, resulting in a condition known as hereditary persistence of fetal hemoglobin [6]. In adults, HbF production can be rekindled pharmacologically, which is one of the main treatment options for sickle-cell disease [5]. In the patient with Hb Toms River, the mechanisms by which erythroid cells switch from synthesizing HbF to HbA during the neonatal period appeared normal. As a result, the genetic disorder did not persist as a threat to the child.

Analysis of the V67M Mutation

A necessary step toward understanding the Hb Toms River disorder is acquiring large amounts of the cyanotic child’s mutated fetal hemoglobin. Because her blood could not be used, our only choice was to construct the mutation in vitro with recombinant DNA techniques and then express the mutant γ chain with wild-type α chains in bacteria. The intestinal bacterium Escherichia coli is an excellent host for recombinant proteins because of its high tolerance for synthesizing large amounts of heterologous protein and the ease of performing site-directed mutagenesis on plasmids that it can take up. The pHE2 plasmid system was originally developed by Chien Ho’s group at Carnegie Mellon University to produce adult hemoglobin [8], and we obtained the corresponding HbF expression system from Professor Kazuhiko Adachi at Children’s Hospital of Philadelphia [1]. This vector contains one wild-type α gene along with one wild-type Gγ gene from human HbF. We created the single-site V67M mutation using the Stratagene QuikChange Site-Directed Mutagenesis Kit. The expression plasmids were transformed into E. coli JM109 cells, which were grown in 2x YT medium. Expression was induced by adding 0.1 mM isopropyl-β-D-thiogalactopyranoside (IPTG) at 37 °C, and the cultures were supplemented with hemin (30 μg/ml). The harvested cell lysate was passed through a Zn2+-binding column, a Fast Flow Q-Sepharose column, and finally a Fast Flow S-Sepharose column using an FPLC.
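
The core idea behind QuikChange-style site-directed mutagenesis is a pair of complementary primers that carry the desired codon change, here Val (GTG) to Met (ATG). The sketch below shows that primer logic in code; the flanking sequences are hypothetical placeholders, not the real γ-globin gene sequence:

```python
# Sketch of QuikChange-style primer design: a sense primer carrying
# the mutant codon, plus its exact reverse complement as the
# antisense partner. Flanking sequences are HYPOTHETICAL placeholders,
# not the actual human gamma-globin sequence.
COMP = str.maketrans("ACGT", "TGCA")

def reverse_complement(seq):
    # Complement each base, then reverse the strand direction.
    return seq.translate(COMP)[::-1]

flank5 = "GACCTGTCCACTCCT"   # placeholder 5' flank around codon 67
flank3 = "GGCAACCCCAAAGTC"   # placeholder 3' flank
mutant_codon = "ATG"          # Met replaces the wild-type Val (GTG)

forward = flank5 + mutant_codon + flank3   # sense mutagenic primer
reverse = reverse_complement(forward)      # antisense partner

print(forward)
print(reverse)
```

During the reaction, both primers anneal to the denatured plasmid and a high-fidelity polymerase copies the entire vector, so every daughter plasmid carries the single-codon change.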

We are currently evaluating the purity and authenticity of our HbF mutant by performing gel electrophoresis and protein sequencing reactions from aliquots of the purified protein. Our long-term goal is to characterize HbF Toms River in terms of its relative stability and O2 affinity with the hope that recombinant technology can help us understand the clinical symptoms of the hemoglobinopathy and perhaps suggest a treatment.

For the past twenty years, Dr. John Olson’s laboratory in the Department of Biochemistry & Cell Biology at Rice has been examining O2 binding to mutants of mammalian myoglobin and the α and β subunits of HbA. Much of this work has focused on amino acid substitutions within the oxygen-binding pocket, including at the valine E11 position. In 1995, an undergraduate honors research student in Dr. Olson’s laboratory, Joshua Warren (Rice BA 1996; Yale PhD 2002), used sperm whale myoglobin (Mb) as a model system to examine the effects of valine E11 to methionine, phenylalanine, tyrosine, and tryptophan mutations on oxygen binding. All four of these amino acids have much larger side chains, which fill up the interior portion of the pocket that captures diatomic gases such as O2, CO, and NO. Warren observed dramatic decreases in the rates of O2 uptake and release for the valine E11 to methionine replacement, and similarly marked decreases relative to wild-type Mb for the Phe, Tyr, and Trp mutations, all of which shrink the binding pocket [4].

The mechanism of O2 binding to either Mb or a Hb subunit is analogous to catching a baseball in a fielder’s glove. As the “thumb” opens, through upward and outward movement of the histidine E7 side chain (Figure 4), incoming oxygen can be caught in the pocket of the glove. If the available space in the glove is made too small by a large amino acid like methionine, the ball, or O2, will bounce back out of the globin, and multiple tries are required before it is finally captured and bound to the iron atom. Thus, we expect the valine E11 to methionine mutation to appreciably slow O2 binding.

At the moment, the structure of the γ valine E11 to methionine mutant has only been simulated (Figure 4), and recombinant HbF Toms River has not yet been characterized. However, based on Warren’s results with Mb, we predict that O2 binding may be slowed so much that red blood cells containing the HbF mutant cannot take up oxygen quickly enough during passage through the placenta or newborn lungs. Consequently, the blood would be only partially saturated with O2 and would take on the purplish-blue color associated with cyanosis. We are currently setting up the HbF expression system and, as a control, will show that wild-type γ subunits have kinetic and stability parameters very similar to those of HbA β subunits.
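
The prediction above can be made concrete with a back-of-the-envelope kinetic model: treating O2 uptake as a pseudo-first-order process and ignoring dissociation, the fraction of hemes loaded after a capillary transit time t is Y(t) = 1 − exp(−k′on·[O2]·t). All numbers below are illustrative assumptions chosen to show the qualitative effect, not measured values for HbF Toms River:

```python
import math

# Fraction of hemes that have bound O2 after time t, treating binding
# as pseudo-first-order and ignoring dissociation (an illustrative
# simplification; all rate constants below are assumed).
def fraction_bound(k_on_uM_s, o2_uM, t_s):
    return 1.0 - math.exp(-k_on_uM_s * o2_uM * t_s)

O2 = 50.0       # free O2 concentration, uM (assumed)
TRANSIT = 0.5   # capillary transit time, seconds (assumed)

k_wt  = 10.0    # uM^-1 s^-1: order of magnitude for a fast-binding subunit
k_mut = 0.01    # uM^-1 s^-1: drastically slowed association (assumed)

# With a fast rate the exponent is large and saturation is complete;
# with the slowed rate, only a fraction of hemes load O2 in transit.
print(f"wild-type: {fraction_bound(k_wt, O2, TRANSIT):.2f}")
print(f"V67M:      {fraction_bound(k_mut, O2, TRANSIT):.2f}")
```

The point of the sketch is that partial saturation follows directly from a slowed association rate whenever k′on·[O2] becomes comparable to the inverse of the transit time, which is exactly the situation we hypothesize for the V67M γ chains.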

With luck, this system will allow complete characterization of the HbF Toms River mutation, and recombinant DNA technology will let us study the associated clinical disorder without having to obtain the infant’s blood. This in vitro approach represents an important advance in characterizing genetic defects in hemoglobins and provides a general strategy for determining the mechanisms underlying the phenotypes associated with hemoglobin mutations, as well as possible treatments to restore normal physiological function.

Arindam Sarkar is a sophomore double-majoring in Biochemistry & Cell Biology and Policy Studies at Lovett College.

Acknowledgements

I thank Dr. John S. Olson and Todd Mollan for their useful comments, Ivan Birukou for a helpful discussion on the purification of recombinant hemoglobin, Dr. Jayashree Soman for her expedient provision of images, and once again Todd Mollan for his tireless support and tutelage.

References

1. Adachi, K., Zhao, Y., and Surrey, J. (2002) Assembly of Human Hemoglobin (Hb) β- and γ-Globin Chains Expressed in a Cell-free System with α-Globin Chains to Form Hb A and Hb F. J. Biol. Chem. 277, 13415-13420.
2. Bunn, H.F., and Forget, B.G. (1986) Hemoglobin: Molecular, Genetic and Clinical Aspects. Saunders, Philadelphia, PA.
3. Dickerson, R.E., and Geis, I. (1983) Hemoglobin: Structure, Function, Evolution, and Pathology, pp. 21-26. Benjamin-Cummings Publishing Co., Menlo Park, CA.
4. Nienhaus, K., Deng, P., Olson, J.S., Warren, J.J., and Nienhaus, G.U. (2003) Structural Dynamics of Myoglobin: Ligand migration and binding in valine 68 mutants. J. Biol. Chem. 278, 42532-42544.
5. Platt, O.S. (2008) Hydroxyurea for the treatment of sickle cell anemia. N. Engl. J. Med. 358, 1362-1369.
6. Friedman, S., and Schwartz, E. (1976) Hereditary persistence of foetal haemoglobin with beta-chain synthesis in cis position. Nature 259 (5539).
7. Sears, D.W. (1999) Overview of Hemoglobin’s Structure/Function Relationships. Accessed February 2005.
8. Shen, T.J., Ho, N.T., Simplaceanu, V., Zhou, M., and Ho, C. (1993) Production of unmodified human adult hemoglobin in Escherichia coli. Proc. Natl. Acad. Sci. USA 90, 8108-8112.
9. Steinberg, M.H., Forget, B.G., Higgs, D.R., and Nagel, R.L. (2001) Disorders of Hemoglobin. Cambridge University Press, Cambridge.