Archive for the ‘Modern Science’ Category

Reading the Mind: Magic or Medicine?

by: Tina Munjal, ’12

(You may also see the full spread of this article in a PDF.)

In J.K. Rowling’s hit series Harry Potter, lead characters Albus Dumbledore and Lord Voldemort practice Legilimency, or the ability to “mind read” the emotions and thoughts of other people. Such a concept need no longer be dismissed as a crafty plot device in a children’s fantasy, for radiological science has brought a sort of real-life Legilimency into the realm of possibility through functional magnetic resonance imaging, or fMRI. fMRI operates by the principle that active areas of the brain require increased blood flow due to the greater oxygen requirements of the neurons in those regions. An fMRI machine utilizes the difference in the magnetic properties of oxyhemoglobin and deoxyhemoglobin to determine which regions of the brain are active during particular cognitive tasks [2]. Although fMRI may not be entirely magical, the implications of such a technology are nevertheless vast and varied. Indeed, fMRI is creating a stir in communities of scientists, ethicists, lawyers, educators, and physicians.
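The logic by which “active” regions are flagged can be sketched in a few lines: a voxel whose signal rises and falls with the task is treated as task-related. The toy Python example below uses simulated numbers and a plain correlation in place of the hemodynamic models and statistics of a real fMRI pipeline; it is only an illustration of the principle.

```python
import numpy as np

# Toy illustration of task-related activation detection: a voxel whose BOLD
# signal tracks the on/off task blocks is flagged as "active". Real analyses
# model the hemodynamic response and many confounds; this is only a sketch.
rng = np.random.default_rng(0)

n_scans = 120                                        # time points in the scan
task = np.tile([0] * 10 + [1] * 10, n_scans // 20)   # alternating rest/task blocks

active_voxel = 100 + 2.0 * task + rng.normal(0, 1, n_scans)  # signal follows the task
quiet_voxel = 100 + rng.normal(0, 1, n_scans)                # pure noise

def task_correlation(signal, design):
    """Pearson correlation between a voxel time course and the task design."""
    return np.corrcoef(signal, design)[0, 1]

print(f"task-responsive voxel: r = {task_correlation(active_voxel, task):.2f}")
print(f"unresponsive voxel   : r = {task_correlation(quiet_voxel, task):.2f}")
```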

For example, fMRI has the potential to be used as a legal arbiter, incriminating the guilty and vindicating the innocent, by detecting patterns of activity in the brain associated with dishonesty. While tests like the polygraph can be manipulated by the subject through altered physiological responses, fMRI read-out is more resistant to conscious control [1]. Two American companies, Cephos and No Lie MRI, have already released their prototypes of fMRI-based lie-detection systems. The founder of Cephos, Steven Laken, claims that his company’s system works accurately 95% of the time. Laken predicts that information acquired through such neuroimaging technologies may be acceptable for use as evidence in court for civil cases as early as next year [5].

Scholars have their reservations, however. First, fMRI is not foolproof. The changes in hemodynamic response that fMRI measures can be affected by a range of factors, including age, fitness, and medications [3]. Furthermore, the majority of deception-pattern studies have been conducted on subjects, frequently college students, who are asked to commit inconsequential dishonesty during their scans. The results obtained from these studies cannot necessarily be applied to real-life situations in which the nature of the lies is starkly different and the potential consequences much more grave [3]. In addition, the barrier between inner thought and outer communication, the sense of “mental privacy” [1], could quickly be degraded by such potentially intrusive applications of fMRI. Another significant impediment to the advancement of so-called neurolaw is the Fifth Amendment to the U.S. Constitution, which prohibits compelled self-incrimination [5]. Thus, although fMRI could one day be used to exonerate innocent individuals, this benefit would come only at the expense of the rights of the accused.

The controversy is not limited to the courtroom, however, as it could potentially also threaten the classroom dynamic [4]. If fMRI really does reveal the inner workings of the brain and mind, what could possibly serve as a better assessor of academic potential? The SAT would undoubtedly be left in the dust. However, current imaging technology cannot yet account for the fact that young brains are still developing and immensely plastic, so the margin of error for fMRI is larger in children [4]. In other words, students ought not to throw away their preparatory books just yet. Tests like the SAT are here to stay for quite some time.

On a related note, it has been suggested that fMRI could one day be used on infants to forecast the likelihood of later brain dysfunction. Once again, however, we find that the developing brain can sometimes be very unpredictable [4]. Prematurely labeling a child as mentally deficient could cause the child to be needlessly stigmatized, as well as deprived of a full education and even of insurance coverage [6]. Perhaps it is better, then, that fMRI is not yet capable of completely accurate prognoses, seeing as society is as yet unprepared to deal with the implications.

True, fMRI is not immediately going to change the way we govern, learn, and diagnose, but the potential is there. While we wait for technology to catch up with the ideas of our times, it is imperative that a solid ethical and logistical framework be constructed so that, when the “magic” of functional neuroimaging arrives in full force, we can turn it to our benefit rather than our detriment.

Tina Munjal is a freshman double-majoring in Biochemistry & Cell Biology and Cognitive Sciences at Wiess College.

References

1. Bles, M., Haynes, J.D. Detecting concealed information using brain-imaging technology. Neurocase. 2008, 14, 82-92.
2. BOLD functional MRI. http://lcni.uoregon.edu/~ray/powerpoints/lecture_10_24.ppt (accessed January 16, 2009).
3. Deceiving the law. Nat. Neurosci. 2008, 24, 1739-1741.
4. Downie, J., Schmidt, M., Kenny, N., D’Arcy, R., Hadskis, M., Marshall, J. Paediatric MRI Research Ethics: The Priority Issues. J. Bioethical Inquiry. 2007, 4, 85-91.
5. Harmanci, R. Complex brain imaging is making waves in court. San Francisco Chronicle [Online]. October 17, 2008. http://www.sfgate.com/cgi-bin/article.cgi?f=/c/a/2008/10/17/MN8M13AC0N.DTL (accessed January 16, 2009).
6. Illes, J. Neuroethics in a New Era of Neuroimaging. Am. J. Neuroradiol. 2003, 24, 1739-1741.

The Effect of the Perinatal Condition on Adolescent Anti-Social Behavior

by: Manivel Rengasamy, ’10

(You may also see the full spread of this article in a PDF.)

Adolescence is a tumultuous time developmentally, characterized by dramatic physiological and psychological changes. In addition, adolescence is a common period for the onset of mental disorders, ranging from mood disorders such as depression or bipolar disorder to eating disorders such as anorexia nervosa [2]. A series of developmental periods may influence or indicate vulnerability to mental disorders in adolescence, and one of the earliest is the perinatal phase, the time from roughly five months before birth to one month after. Adverse perinatal events, such as an unstable maternal emotional state, poor maternal physical condition, substance abuse, or smoking, are likely to lead to mental disorders later in life [1].

Although smaller studies have examined such events, conclusive long-term research on mildly adverse perinatal events in relation to anti-social behavior had not been conducted. Recently, Nomura et al. investigated the effects of perinatal events on anti-social behavior during adolescence [5]. Over a period of thirty years, roughly 1,500 randomly selected individuals born to mothers enrolled at the Johns Hopkins site of the Collaborative Perinatal Study were examined. Perinatal markers, such as birthweight, head circumference, and 5-minute Apgar scores (scores reflecting an infant’s general health), were recorded at birth; lower birthweight, smaller head circumference, and lower Apgar scores indicate suboptimal perinatal conditions. Children with varying perinatal markers were evaluated throughout childhood for neurological problems, communication problems, academic and cognitive abilities, and adolescent anti-social behavior. After statistical analysis using structural equation modeling (SEM), indirect associations were found between perinatal birth conditions and most evaluated developmental difficulties in both males and females, most notably anti-social behavior during adolescence. The pathway from perinatal condition to adverse behavioral outcome was confirmed, as poor perinatal status was correlated with a developmental problem, and each developmental problem in turn correlated positively with the next. For example, poor perinatal status correlated with neurological problems, which then correlated with communication problems.
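The chained associations described above can be illustrated with a toy simulation. The sketch below is not Nomura et al.’s actual SEM analysis; the effect sizes and data are invented purely to show how a pathway of sequential associations, from perinatal status through intermediate developmental problems to anti-social behavior, can be quantified.

```python
import numpy as np

# Toy mediation-style chain, loosely mirroring the pathway described above:
# perinatal status -> neurological problems -> communication problems
# -> anti-social behavior. All numbers are simulated for illustration only.
rng = np.random.default_rng(1)
n = 1500

perinatal  = rng.normal(0, 1, n)                    # composite perinatal risk score
neuro      = 0.4 * perinatal + rng.normal(0, 1, n)  # neurological problems
comm       = 0.5 * neuro + rng.normal(0, 1, n)      # communication problems
antisocial = 0.6 * comm + rng.normal(0, 1, n)       # anti-social behavior score

def slope(x, y):
    """Ordinary least-squares slope of y on x (with an intercept term)."""
    design = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(design, y, rcond=None)[0][1]

# Each link in the chain shows a clear positive association...
print("perinatal -> neurological     :", round(slope(perinatal, neuro), 2))
print("neurological -> communication :", round(slope(neuro, comm), 2))
print("communication -> anti-social  :", round(slope(comm, antisocial), 2))

# ...while the perinatal -> anti-social association is weaker, consistent
# with an effect transmitted indirectly through the intermediate problems.
print("perinatal -> anti-social      :", round(slope(perinatal, antisocial), 2))
```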

This research is vital because it studies relatively normal-term infants rather than preterm or post-term infants. Although preterm and post-term infants present many mental health problems, normal-term infants should also be studied as a population at risk for future problems [3]. The findings thus give perinatal research a clear practical payoff: by providing a way to link moderately poor perinatal status to future anti-social behavior, the research offers understanding that can help prevent such behavior and related problems. Early in life, at-risk individuals can be identified on the basis of perinatal status and supported properly through subsequent developmental stages. By conclusively establishing that a general pathway of developmental problems beginning with poor perinatal status does exist, anti-social difficulties in adolescence can be limited by examining the various stages of child development and resolving cognitive and emotional issues as they arise. Future study and research could substantially reduce anti-social behavior, which is often a factor in multiple mental disorders.

Children born at term but with adverse perinatal conditions may also have more modifiable outcomes than preterm or post-term infants, allowing this research to have a profound effect on the lives of many future children. Perinatal research can help in the early identification and support of children at risk for neurological, emotional, or cognitive problems; individuals at risk could be recognized with simple tests that nurses could administer.

Early detection of these problems could also aid the educational system financially by promoting earlier and less costly treatment. With specialized management, educators might be better able to lead students toward academic and social success. In addition, parents could be more prepared to work with the educational system and their children, since they would be equipped with knowledge of the difficulties their children might face [4]. Lastly, the healthcare system would be well served, as doctors could spend more time treating ailments and less time making diagnoses. Although Nomura’s study has limited external validity (88% of the infants were of African American descent), future studies could incorporate a more representative population sample, allowing these beneficial measures to be implemented more broadly.

In the field of child development, this research has laid a significant stepping stone toward reducing anti-social behavior in adolescence, as well as other psychological problems throughout childhood. The longitudinal study of Nomura et al. has made significant progress in understanding the long-term effect of the overall perinatal condition and its ability to predict developmental problems. Future research can help determine whether an actual cause-and-effect relationship exists between perinatal status and developmental psychological outcomes, such as anti-social disorder. Clearly, research on perinatal status and its effect on anti-social behavior in adolescents is a major step forward in child development and psychology.

Manivel Rengasamy is a junior double-majoring in Biochemistry & Cell Biology and Psychology at Lovett College.

References

1. Allen N. B., Lewinsohn P. N., & Seeley J. R. Prenatal and perinatal influences on risk for psychopathology in childhood and adolescence. Development and Psychopathology 2008; 10, 513-529
2. Bhatia SK, Bhatia SC. Childhood and adolescent depression. Am Fam Physician 2007; 75: 73-80, 83-84
3. Kenworthy OT, Bess FH, Stahlman MT, Lindstrom DP. Hearing, speech, and language outcome in infants with extreme immaturity. Am J Otol 1987;8:419-425
4. Litt J., Taylor HG, Klein N, Hack M. Learning disabilities in children with very low birthweight: Prevalence, neuropsychological correlates, and educational interventions. Journal of Learning Disabilities 2005;38:130–141
5. Nomura Y., Rajendran K., Brooks-Gunn J., Newcorn JH. Roles of perinatal problems on adolescent antisocial behaviors among children born after 33 completed weeks: a prospective investigation. J Child Psychol Psychiatry 2008; 49:1108–1117
6. Saigal S, Pinelli J, Hoult L, Kim MM, Boyle M. Psychopathology and social competencies of adolescents who were extremely low birth weight. Pediatrics 2003;111:969–975

Tissue-Engineered Articular Cartilage

by: Vasudha Mantravadi, ’12

(You may also see the full spread of this article in a PDF.)

A star football player watches his knee buckle backward, hears an excruciating “pop,” and realizes his worst fear has come true – a meniscal tear. A 65-year-old woman rises from her chair to sense a stiffness in her ankles that forces her to limp, reminding her that her osteoarthritis is growing worse day by day. Despite their broad age gap, these two are victims of similar afflictions that are caused, at least in part, by damage to cartilage.

Cartilage is a compact connective tissue whose chemical and physical properties allow it to offer support and facilitate smooth movement in skeletal joints. It is composed of cells known as chondrocytes and a surrounding extracellular matrix (ECM) that contains various proteins and other structural components. Together, these extracellular components form a gel-like mesh that gives cartilage its mechanical strength. However, if cartilage is damaged, chondrocytes cannot regenerate it, not only because they are too few, but also because they lack access to blood and nutrients [1]. This calls for a tissue-engineering approach that allows cartilage to be grown in the laboratory and implanted into joints that have suffered this kind of damage.

Many labs have worked towards developing such an approach, but in 2006, Rice Professor of Bioengineering Dr. Kyriacos Athanasiou and his group, most notably Dr. Jerry Hu, pioneered a unique self-assembly method of growing cartilage. This method involved seeding a high density of chondrocytes in agarose wells, which served as three-dimensional “molds” in which the tissue constructs could form. Unlike most previous attempts, this new method did not rely on the construction of “templates” known as scaffolds to direct tissue formation [2]. The Athanasiou lab has continued to use the self-assembly method in creating articular cartilage (the smooth tissue lining the ends of bones at moveable joints) and in analyzing biochemical and mechanical characteristics during tissue development. They have been able to modify this technique to closely mirror the developmental stages of native articular cartilage and even to identify key points for effective manipulation of the tissue growth.

One of their research projects has focused on the ECM development during cartilage self-assembly and the specific trends in matrix components such as various collagen types, glucose and glycosaminoglycan (GAG) types, and N-cadherin. By relating these observations with the results of mechanical testing, Dr. Athanasiou’s team has identified structure-function relationships in the developmental process that make articular cartilage the strong material it is [3].

To create the cartilage tissue constructs, the lab members first prepared a mixture of bovine articular chondrocytes in a culture medium and inserted it into agarose wells. From this point onward, they conducted the experiment in two phases over a total of 8 weeks, collecting both qualitative and quantitative data on the ECM components. For example, immunofluorescent staining, which involves labeling specific molecules and structures using antibodies, was used to map the general occurrence of N-cadherin and collagen types I, II, and VI. Safranin-O staining was used to label GAG. Shifting to a more specific analysis, the team collected biochemical data to find the net amounts of GAG, glucose, collagen, and individual collagen types contained in each construct. They found that overall GAG concentration increases during the growth of the constructs, but a certain type of GAG known as chondroitin 4-sulfate (CS-4) increases at a faster rate than the related chondroitin 6-sulfate (CS-6). Collagen type II increases and spreads throughout the tissue whereas collagen type VI decreases and ultimately remains in the region immediately outside the cells. N-cadherin, the protein responsible for intercellular adhesion of chondrocytes (and therefore crucial to the initial steps of the self-assembly process), has an increased presence only for a short time during the early stages of development.

The tissue samples from the second phase were subjected to mechanical stresses, such as tension and compression. The results of these tests were then correlated with the biochemical data, revealing several relationships between structural composition and function. For example, the rising trend in GAG levels corresponded with increasing compressive stiffness, while the net amount of collagen, as well as the collagen types present in a construct, determined its tensile strength.
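The kind of structure-function comparison described here can be sketched with a simple correlation. The values below are invented placeholders, not measurements from the Athanasiou lab’s study; the snippet only illustrates how a compositional measure might be related quantitatively to a mechanical one.

```python
import numpy as np

# Illustrative (made-up) numbers relating a compositional measure of the
# constructs (GAG content) to a mechanical one (compressive modulus).
rng = np.random.default_rng(2)

weeks = np.array([1.0, 2.0, 4.0, 6.0, 8.0])                 # construct age in culture
gag = 2.0 + 0.8 * weeks + rng.normal(0, 0.3, weeks.size)    # % wet weight (hypothetical)
modulus = 30 + 25 * gag + rng.normal(0, 10, weeks.size)     # kPa (hypothetical)

# A positive Pearson correlation would mirror the reported trend of rising
# GAG levels accompanying increasing compressive stiffness.
r = np.corrcoef(gag, modulus)[0, 1]
print(f"GAG content vs. compressive modulus: r = {r:.2f}")
```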

Another important finding in this experiment involved identifying the point at which significant changes in localization of collagen and GAGs occurred, which was around 4 weeks. The lab members believe this may be an effective time to introduce stimuli that will influence the overall development of the constructs, allowing them to enhance the self-assembly process and achieve one of the main goals of this research [3]. This project was aimed mainly at gaining a greater understanding of the self-assembly process in articular cartilage, but future projects will go even further by using this knowledge to modify that process–to really engineer it.

According to one of the lab members, MD/PhD student Sriram Eleswarapu, one of the major problems facing tissue engineering is “making a three-dimensional tissue that is mechanically stable – this is an engineering feat in itself.” However, once the self-assembly process is improved, this feat will come closer to real clinical application. At the same time, the discovery of alternative cell sources such as embryonic, mesenchymal, and induced pluripotent stem cells, each of which is studied by the Athanasiou group, will eliminate the need to extract cells from individuals, making it more feasible to produce tissue for transplantation into patients with damaged cartilage.

Vasudha Mantravadi is a freshman majoring in Bioengineering at Jones College.

References

1. “Creating a More Natural Engineered Cartilage.” http://www.arthritis.org/natural-engineered-cartilage.php (accessed 10/13/08)
2. Jade Boyd. “Rice bioengineers pioneer method to grow replacement cartilage.” http://www.media.rice.edu/media/NewsBot.asp?MODE=VIEW&ID=8375 (accessed 10/14/08)
3. Ofek G, Revell CM, Hu JC, Allison DD, Grande-Allen KJ, et al. (2008) Matrix Development in Self-Assembly of Articular Cartilage. PLoS ONE 3(7): e2795. doi:10.1371/journal.pone.0002795

Immunization and Public Health

by: Trishna Narula, ’12

(You may also see the full spread of this article in a PDF.)

Memories of the summer before I matriculated at Rice would be incomplete without images of heaps and stacks of various forms waiting to be completed – one of which was a health record to be signed by my pediatrician. I decided it would be better to verify for myself whether I was up to date on my vaccines rather than receive an unanticipated prick at the doctor’s office. Thus, like hundreds of millions of people around the globe, I turned confidently to my omniscient and omnipresent companion Google to answer my queries. I entered the word “immunizations” and precisely 0.16 seconds later, I was rewarded with various reliable sites such as the CDC’s and the NIH’s. However, preceding this obliging list was the phrase “related searches: against immunizations,” glaring at me in underlined, bold font. Curious, I clicked it. Whatever could this mean? The search engine directed me back in time…

Once upon a time, bacteria and viruses dominated the world. Actually, even today there are more microbes on your hand than there are people on the planet! However, I’m talking about the more pathogenic bacteria and viruses. Their lives (or “lives” in the case of viruses, which are scientifically classified in the gray area between living and nonliving) consisted of finding an innocent animal or human cell in which – once they had invited themselves in through endocytosis – they established residence and from which they pinched food and nutrients. If for some reason things didn’t quite work out in their favor, they could easily pack up and drift to a more vulnerable host that would allow them to proliferate exponentially. These parasites were undoubtedly very happy until one fine day in 1796…

Little Miss Smallpox sat in a big ox, eating its host cells away. Along came Edward Jenner, and sat down beside her, and frightened the virus away.

Well, he had to put in a little more effort than just sitting next to her. Jenner successfully tricked the human immune system into believing that a small amount of injected cowpox virus was actually the related but much more lethal smallpox virus, thereby inciting an immune response and inoculating the person against smallpox in the future.

The term “vaccination” was thus coined from vacca, Latin for cow, and was initially used solely for poxvirus immunogen injections. Since then, however, vaccines for hepatitis A, hepatitis B, polio, mumps, measles, rubella, diphtheria, pertussis, tetanus, Haemophilus influenzae, chicken pox, rotavirus, influenza, meningococcal disease, and pneumonia have been developed, and most recently, for human papillomavirus and shingles. (Ironically, since smallpox was declared eradicated in 1979, routine vaccination against it is no longer administered.)

Not only have these immunizations been developed, but they have also been widely implemented, resulting in one of the most successful public health endeavors in the history of mankind. In 2006, approximately 77% of 19 to 35-month-olds in America received all their recommended vaccinations, a record high rate for the nation.

Nevertheless, as Benjamin Dana said, “There has been opposition to every innovation in the history of man.” Microorganisms are once again gaining strength, and we the people are the ones abetting them. More and more individuals are beginning to believe that vaccines are perhaps more medically harmful than they are beneficial, and parents are indeed finding legal ways – such as religious or philosophical waivers – to opt their children out of the requirement.

Undeniably, one of the basic arguments is a fairly legitimate one: babies less than a day old, with feeble immune systems, are being infused with samples of potentially potent microorganisms. Intuition suggests that such a seemingly dangerous practice could hardly be beneficial. Empirical studies, however, beg to differ. One particularly significant study from Denmark, published in 2005, illustrates the lack of correlation between increased vaccine exposure and a higher rate of infections. In fact, common childhood fevers pose a greater threat to the immune system than vaccines do.

A more specific and increasingly prominent issue is the controversy created by claims that the MMR (measles-mumps-rubella) vaccine causes autism. Murmurs of this hypothetical link arose when the rate of autism diagnoses began to climb during the 1980s and 1990s, at the same time that the MMR vaccine was being introduced in Britain and its use mounting in the United States. The tipping point, however, was a 1998 article in The Lancet authored by Andrew Wakefield, M.D., that implicitly proposed this connection.

Wakefield’s research was, from the beginning, generally refuted by the scientific and medical community, including the overwhelming majority of the co-authors of the original article. In addition, a couple of likely coincidences that could skew the data must be taken into account. First, though diagnoses of autism have undoubtedly increased dramatically, the phenomenon may very well be attributed to heightened awareness of the condition, leading to better detection and even a wider diagnostic definition; alternative explanations include environmental and genetic factors. Second, the mere temporal coincidence between the typical onset of autism and the usual timing of MMR administration may easily have been mistaken for a link between the two. Indeed, later studies clearly demonstrate the absence of an association.

Nevertheless, the damage had been done to the public psyche, and the media amplified the idea to an immense extent. Immunization rates had already started dropping. Moreover, there was another factor fueling the pandemonium…

From the 1930s until the turn of the century, thimerosal, or sodium ethylmercurithiosalicylate, was used as a preservative in several vaccines (the MMR, ironically, not being one of them) to thwart contamination by bacteria or fungi. Sounds all fine and dandy, right? Wrong. Take a look at that chemical name again… thimerosal is composed of nearly half mercury by weight. Excessive mercury is capable of causing substantial brain damage, especially in children, whose actively growing and developing brains are most vulnerable. Although it was soon realized that thimerosal degrades to ethyl mercury rather than the better-known and more toxic methyl mercury, the situation remained ambiguous enough for thimerosal to be removed from all routinely required childhood vaccines. By 2001 this had largely been accomplished, mainly by switching to single-dose vials, which, though more expensive, need no preservative.
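The “nearly half mercury by weight” figure can be checked directly from thimerosal’s chemical formula, C9H9HgNaO2S, and standard atomic masses; the short sketch below works through that arithmetic.

```python
# Mass fraction of mercury in thimerosal (C9H9HgNaO2S), computed from
# standard atomic masses (g/mol). This is simple bookkeeping, not a claim
# about toxicity or exposure levels.
ATOMIC_MASS = {"C": 12.011, "H": 1.008, "Hg": 200.59, "Na": 22.990, "O": 15.999, "S": 32.06}
FORMULA = {"C": 9, "H": 9, "Hg": 1, "Na": 1, "O": 2, "S": 1}

molar_mass = sum(ATOMIC_MASS[element] * count for element, count in FORMULA.items())
mercury_fraction = ATOMIC_MASS["Hg"] / molar_mass

print(f"thimerosal molar mass ~ {molar_mass:.1f} g/mol")   # ~404.8 g/mol
print(f"mercury mass fraction ~ {mercury_fraction:.1%}")   # ~49.6%, i.e. nearly half
```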

Despite this change, autism rates have still been on the rise. Indeed, expert committees convened by bodies such as the CDC and the WHO have concluded time and again over the past decade that there is no scientific evidence to support a link between thimerosal and autism.

Evidently, a major shortcoming in our society is the lack of accurate awareness of these topics. Even if one goes back to a straightforward cost-to-benefit calculation, most of today’s generations have never witnessed these epidemics and thus cannot rationally perceive the full extent of such widespread disease. These individuals find it difficult to grasp the fact that although vaccinations have reduced so many diseases nearly to the point of extinction, diminishing the coverage can easily bring them back to life in full force.

Unfortunately, a rather large number of parents seem to be thinking along the lines of, “Even if my kid doesn’t get vaccinated, all the other kids around him will be, so it will shield him anyway.” But if the other kids’ parents catch on to the same free-rider idea, the insulation bubble will quickly begin to disappear, failing to protect even those who genuinely have no other choice – for example, cancer patients with very weak immunity or individuals whose vaccinations were not 100% effective.
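That “insulation bubble” is what epidemiologists call herd immunity, and the coverage needed to maintain it can be estimated with a standard textbook approximation (not taken from this article): the critical vaccination fraction is roughly (1 - 1/R0)/E, where R0 is a disease’s basic reproduction number and E is vaccine effectiveness. The sketch below plugs in rough, commonly cited R0 estimates purely to illustrate why even modest drops in coverage matter.

```python
# Herd-immunity threshold sketch: V_c = (1 - 1/R0) / E, where R0 is the basic
# reproduction number and E is vaccine effectiveness. The R0 values below are
# rough, commonly cited estimates used only to illustrate the arithmetic.

def critical_coverage(r0: float, effectiveness: float = 1.0) -> float:
    """Approximate fraction of the population that must be vaccinated
    to prevent sustained transmission."""
    return (1.0 - 1.0 / r0) / effectiveness

examples = [("measles (R0 ~ 15)", 15.0), ("polio (R0 ~ 6)", 6.0)]
for disease, r0 in examples:
    needed = critical_coverage(r0, effectiveness=0.95)
    print(f"{disease}: roughly {needed:.0%} coverage needed with a 95%-effective vaccine")
```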

The take-home message here is rather transparent: vaccinations are essential. As a rule of thumb, the benefits outweigh the risks (at least until we can genetically screen people to predict their individual reactions to a vaccine), since a link between vaccines and autism has not been demonstrated. To put a number on it, the CDC estimates that childhood immunizations save 33,000 lives in the U.S. alone annually. Most importantly, we must realize that everyone is on the same side – parents, doctors, policymakers – in the battle against disease and for human life.

Trishna Narula is a freshman double-majoring in Biochemistry & Cell Biology and Psychology at Will Rice College.

References

1. http://www.microbeworld.org/microbes/
2. http://www.cbsnews.com/stories/2007/08/30/health/webmd/main3221902.shtml
3. http://jama.ama-assn.org/cgi/content/abstract/294/6/699
4. http://www.thelancet.com/journals/lancet/article/PIIS0140673697110960/fulltext
5. http://www.dukehealth.org/HealthLibrary/AdviceFromDoctors/YourChildsHealth/mmr_vaccine_and_autism
6. http://www.plosone.org/article/info:doi/10.1371/journal.pone.0003140
7. http://www.niaid.nih.gov/factsheets/thimerosal.htm
8. http://www.fda.gov/cber/vaccine/thimfaq.htm
9. http://www.who.int/vaccine_safety/topics/thiomersal/en/index.html
10. http://blog.roodo.com/trac_mak/43977335.pdf
11. http://www.reuters.com/article/latestCrisis/idUSN04273085

Video Gaming: “Sixth Sense?”

(You may also see the full spread of this article in a PDF.)

How often have you heard concerned parents shout, “Video games will rot your mind?” They lead us to believe that video gaming is detrimental, but this is not necessarily true. Recent studies show fascinating results: a little bit of game play has positive effects on spatial skills, attentional capacity, and even emotional development.

Video gaming has been shown to improve spatial skills. A study by Isabelle Cherney at Creighton University, published in December 2008, revealed that playing computer games boosts mental rotation skills involving 3-D and 2-D objects. In the study, 61 undergraduates performed mental rotation tests, such as the Vandenberg Mental Rotation Test, which asks subjects to determine whether two 3-D objects are identical; the comparison objects are either rotated versions of the target or different shapes. Scores are based on both the accuracy of identification and how quickly subjects complete the test. After playing Antz (3-D) or Tetris (2-D) over the course of either one or two weeks, the subjects were retested. The study found that, on a scale of 0 to 150, the combined mean test scores for males increased from 124.1 to 143.3 while the mean scores for females increased from 111.1 to 140.1 [1]. Significant increases in spatial skills may translate into better problem-solving ability and can help with everyday tasks.

By engaging the senses, video games stimulate the brain. Research conducted in 2003 by Green and Bavelier at the University of Rochester showed that playing action video games can enhance “attentional capacity,” the ability to effectively filter relevant information from the spectrum of sensory inputs competing for attention. When asked to find a target object within specified rings and to ignore distractor objects outside those rings, video game players were better at processing the presented information and at quickly and accurately determining which of the two possible target shapes appeared. The compatibility-effect measurements (in milliseconds) showed that video game players had enough spare perceptual resources to be affected by distractor items even at high perceptual loads. This was just one of many indications that video game players have enhanced attention and focus. In a similar portion of the study, the test subjects were required to report how many objects were flashed at them, and the gamers’ performance was nearly thirteen percent more accurate than that of the non-gamers. The overall result of the study was that action video gaming can enhance visual-spatial attention [2].

There are even more important benefits derived from relative sensory immersion, and these are gained in the realm of what Eugenie Shinkle at the University of Westminster describes as the “sixth sense” of “proprioception” [4]. Proprioception processes sensory inputs related to the general feel of the body and involves neural input and hypothalamic response. The hypothalamus is an important area of the brain that uses sensory inputs to regulate hormones that control emotional feelings, such as anger, and metabolic processes, such as hunger or fatigue. Video games offer an opportunity for individuals to engage themselves in an interactive experience by inputting commands through an interface; this enables them to directly manipulate the simulated world rather than passively watch a world of media unfold. One of Shinkle’s observations is that by adding emotional aspects to a game through meaningful gameplay, game producers have added importance to proprioceptive stimulation. Gestures exhibited by a player are particularly telling evidence that the hypothalamus is being affected by the experience of playing the game, because “altering one’s posture or expression can lead to a change in emotional state” [4]. Shinkle concludes that engaging in video games can contribute to one’s emotional development.

Regarding games that offer a “mediated enactive experience,” otherwise known as role-playing games, Wei Peng at Michigan State University found that an active participant gets more out of the experience than a passive onlooker who simply watches the action unfold [3]. By playing an interactive game, the player can safely simulate situations that could happen in “real life without confronting any real danger.”

In 2004, a group of researchers at the University of Wisconsin-Madison performed an in-depth evaluation of many existing games. This study analyzed how meaningful simulations presented in games can improve one’s way of dealing with the world by integrating applied activity with learning (similar to how proprioception is exercised within certain games). One particularly striking example was a campaign for virtual political office in the online game The Sims Online, which forced its participants to compete with each other, think on their feet, and run an engaging intellectual race to win over thousands of other players who would act as voters. The conclusion of the analysis was that well-constructed video games can provide effective simulations that have the potential to “change the landscape of education as we know it” by providing environments that allow players to “participate in valued communities of practice and as a result develop the ways of thinking that organize those practices.”

So what does all of this boil down to? These findings strengthen the evidence that gaming is not necessarily a bad thing. Exercising your senses and learning a little self-control while simultaneously engaging in a bit of fun can produce a positive outcome. Video gaming has been shown to improve spatial skills, attentional capacity, and proprioceptive orientation, and even to strengthen learning by providing an applied environment.

Miguel Quirch is a sophomore Biochemistry & Cell Biology major at Martel College.

References

1. Cherney, Isabelle D. 2008. Mom, Let Me Play More Computer Games: They Improve My Mental Rotation Skills. Sex Roles 59: 776-786.
2. Green, C. Shawn, and Daphne Bavelier. 2003. Action video game modifies visual selective attention. Nature 423: 534-537.
3. Peng, W. 2008. The Mediational Role of Identification in the Relationship between Experience Mode and Self-Efficacy: Enactive Role-Playing versus Passive Observation. Cyberpsychology & Behavior 11: 649-652.
4. Shinkle, Eugenie. 2008. Video games, emotion and the six senses. Media Culture Society 30: 907-915.

‘Nanorust’ and Clean Water

by: Eui Whan Moon, Baker ’11

Looking back on the years I have lived in the Central Asian country of Kyrgyzstan, one of my lasting memories takes place in the small, crowded kitchen of our home. Upon our first arrival in the country, my parents and I were strictly warned by other Korean expatriates that it was unsafe to consume tap water that had not been processed. Consequently, my family promptly learned and adopted the purification process. The task is simple, consisting of boiling, cooling, decanting, and filtration (aided by our faithful Brita® jug). Since drinking water is an everyday necessity, this ritual has run on an endless loop in our kitchen for fourteen years, to this very day. If I was bored, I could always count on finding a ten-gallon pot of cooling water on the kitchen stove, waiting to be filtered. All this to say: water is a daily essential, and clean water even more so. Yet many regions of the world do not have this luxury, and as a result, countless lives are claimed each year by harmful pollutants present in drinking water. However, a recently discovered property of iron oxides may hold a promising application for alleviating the problem of water contamination around the globe.

Here in the United States, every municipal water system is accountable to Environmental Protection Agency (EPA) regulations to provide safe drinking water to each home.1 Nonetheless, many developing countries and cities cannot afford such a system, and for their inhabitants, pouring a cup of tap water can be perilous. In a number of Asian countries, groundwater — a key resource for rural communities — is contaminated with dangerous levels of harmful microorganisms, inorganic chemicals, and organic chemicals. A giant among these various contaminants is the inorganic chemical arsenic.2 Arsenic – a colorless, odorless, and tasteless element – causes various health problems upon ingestion, including skin damage, failure of the circulatory system, and cancer.1 In March 2005, the World Bank and the Water and Sanitation Program presented a report on their comprehensive study of groundwater in Asian countries. The study revealed that parts of Bangladesh, China, India, Vietnam, Nepal, and Myanmar were just a few of the numerous hotspots for arsenic contamination. Overall, an estimated sixty-five million people were subject to health risks due to critical levels of arsenic in their water.2 And Asia is only one of many regions facing this problem; the influence of arsenic contamination is so lethal and widespread that the word “crisis” understates the situation at hand.

Meanwhile, in the Western Hemisphere, a handful of scientists at Rice University’s Center for Biological and Environmental Nanotechnology (CBEN) are studying a promising remedy for arsenic contamination. The key to this solution was the discovery of strange magnetic properties among nano-scale magnetite particles. Magnetite (Fe3O4) is an iron oxide, much like rust, so the term “nanorust” was coined for magnetite nanoparticles. Whereas iron(II) oxide (FeO) contains iron only in the +2 oxidation state, Fe3O4 contains iron in both the +2 and +3 states. Nanorust crystals are so tiny that they are measured on the scale of nanometers (billionths of a meter). At this size, magnetite was found to behave differently under the influence of a magnetic field than its counterparts with more conventional dimensions. For instance, based on observations of bulk material, it would take an extraordinarily large magnetic field to extract magnetite nanoparticles suspended in a solution. Yet Dr. Vicki Colvin, the director of CBEN, and her colleagues discovered to their surprise that removing nanorust particles from a solution required only a weak magnetic field. Dr. Colvin told Chemical and Engineering News, “We were surprised to find that we didn’t need large electromagnets to move our nanoparticles, and in some cases, handheld magnets could do the trick.”3 Another property of nanorust is its very high surface area per unit mass, a consequence of the particles being so small. The principle here is simple. Picture a very large copper sphere and a very small copper sphere. Most of the atoms in the large sphere are inside the sphere, not on its surface; the opposite is true for the small sphere. Therefore, comparing a one-kilogram copper sphere to a kilogram of nano-sized spheres will show that the latter has far more surface area. In the case of nanorust particles, a kilogram has enough surface area to cover an entire football field.4
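For uniform solid spheres, this surface-area argument reduces to a simple relation: surface area per kilogram equals 6 divided by (density times diameter), so shrinking the diameter a thousand-fold multiplies the available surface a thousand-fold. The sketch below illustrates the scaling; the magnetite density and the one-micron “conventional” comparison size are assumptions chosen only for illustration, not figures from the CBEN work.

```python
# Specific surface area of uniform solid spheres: SSA = 6 / (rho * d).
# The density below (~5.2 g/cm^3 for magnetite) and the comparison particle
# size are assumptions used only to show how surface area scales with 1/d.
RHO_MAGNETITE = 5200.0  # kg/m^3 (approximate)

def specific_surface_area(diameter_m, density=RHO_MAGNETITE):
    """Surface area per kilogram (m^2/kg) for uniform solid spheres."""
    return 6.0 / (density * diameter_m)

nano = specific_surface_area(12e-9)   # 12 nm nanorust particle
micro = specific_surface_area(1e-6)   # hypothetical 1-micron conventional particle

print(f"12 nm particles : {nano:,.0f} m^2 per kg")
print(f"1 um particles  : {micro:,.0f} m^2 per kg")
print(f"surface-area ratio: ~{nano / micro:.0f}x")
```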

So how exactly do the properties of nano-scale magnetite help solve the arsenic problem? Arsenic has a high affinity for iron oxides. Even so, the use of conventional-sized iron oxides for purifying water has largely proven impractical, inefficient, and tedious.4 Using nanocrystals of iron oxide for the job is an entirely different matter. Due to their exceptional surface area (which translates to more binding spots for arsenic), a given mass of magnetite particles twelve nanometers in diameter can capture one hundred times more arsenic than an equal mass of the larger iron oxide particles used in filters today.5 Once all the arsenic has been collected by the magnetite, the nanoparticles are easily removed from the water using a simple hand magnet. In describing this process to Science Daily, Dr. Colvin claimed, “Arsenic contamination in drinking water is a global problem and while there are ways to remove arsenic, they require extensive hardware and high-pressure pumps that run on electricity . . . . Our approach is simple and requires no electricity.”6 The only problem is the cost: nanorust particles assembled from pure laboratory chemicals can be very expensive. The key ingredients for water-soluble nanorust are rust and a fatty acid (oleic acid). Heating the rust yields magnetite, and a double-layer coating of oleic acid is then applied to each magnetite nanoparticle so that the particles do not stick to one another but instead disperse throughout the water. Cafer Yavuz, a graduate student working under Dr. Colvin, is developing a method to create nanorust particles using inexpensive household items such as rust, olive oil (a source of fatty acid), drain opener, and vinegar. Once perfected, this method could drastically reduce the production cost of nanorust from $2,624 to about $21.50 per kilogram.7 Perhaps one day millions of people threatened by arsenic will be saved by nanorust cooked up on their kitchen stoves.

References

1. Drinking water Contaminants. http://www.epa.gov/safewater/contaminants/index.html (accessed 03/20/08), part of United States Environmental Protection Agency.
2. Arsenic Contamination in Asia. http://siteresources.worldbank.org/INTSAREGTOPWATRES/Resources/ARSENIC_BRIEF.pdf (accessed 03/20/08), part of a World Bank and Water and Sanitation program Report.
3. Cleaning Water With ‘Nanorust’. http://pubs.acs.org/cen/news/84/i46/8446notw4.html (accessed 03/20/08), part of Chemical and Engineering News.
4. Merali, Zeeya. Cooking up ‘Nanorust’ Could Purify Water. http://technology.newscientist.com/article.ns?id=dn10496&print=true (accessed 03/20/08), part of New Scientist Tech.
5. Feder, Barnaby J. Rustlike Crystals Found to Cleanse Water of Arsenic Cheaply. http://www.nytimes.com/2006/11/10/science/10rust.html (accessed 03/20/08), part of The New York Times.

The Embryonic Stem Cell Controversy

by: Sergio Jaramillo, Sid Richardson ’10

Most innovative scientific ideas initially face vehement opposition, followed by a gradual process of testing and evolution into universally accepted dogma. Religion and politics — and their delineation of ethics and morality — have historically played a large role in the scientific process, and in bioethics today they continue to assume a large role in morally controversial topics, one of the most prominent of which is embryonic stem cell research. The current embryonic stem cell (ESC) revolution was ignited by Dr. James A. Thomson at the University of Wisconsin in 1998. The call to explore stem cells’ potential to regenerate tissues and to treat Parkinson’s and Alzheimer’s, among many other diseases, more effectively has sparked a great deal of political and ethical controversy. Due to the debate over the ethics of using fertilized embryos in stem cell research, this issue has also led to groundbreaking findings on the possibility of using other kinds of cells to derive the same benefits.

The isolation of the human ESC was a great scientific achievement with an even greater therapeutic potential. An embryonic stem cell is a stem cell derived from the inner cell mass of a fertilized oocyte at the blastocyst stage. These cells are considered pluripotent because they possess the capacity to divide and produce cells derived from all three germ layers (ectoderm, endoderm, and mesoderm). However, ESCs are limited in that they cannot differentiate into the trophoblasts, which give rise to the placenta. Scientists believe that one day those “shimmering spheres of human potential” (National Geographic, July 2005) will spark a renaissance of life in damaged tissue.1 However, there are risks: if ESCs are left undifferentiated in the body, they can grow uncontrollably, forming a teratoma, a usually benign tumor. On the other hand, ESCs may offer cures for Parkinson’s, Alzheimer’s, diabetes, cancer, hemophilia, and many other diseases.

In light of the fact that ESCs have been the center of much controversy, useful insight may be gained by studying the history of religious and secular debate on the beginning of life. In scientific terms, an embryo is defined as the stage when the dividing cells of the recently fertilized egg gain control of their cellular machinery by beginning to produce their own enzymes; this occurs in the cleaving cell one to two days after conception. The Roman Catholic Church’s current stand on the human status of the blastocyst is that “the ablation of the inner cell mass (ICM) of the blastocyst, which critically and irredeemably damages the human embryo, curtailing its development, is gravely immoral and consequently is gravely illicit”.2 However, this has not always been the Church’s position; until the twelfth century it held Saint Augustine’s doctrine of “the quickening,” which stated that the embryo acquires humanhood through the acquisition of sentience. From the twelfth century to 1869, the Church followed Saint Thomas Aquinas’s doctrine of “delayed hominization,” which stated that of the vegetative, animal, and rational stages, the last must be reached for the embryo to fully attain humanhood. In 1869, Pope Pius IX decreed that life begins at the moment of conception, informed by the knowledge that fertilization involves sperm and egg. Thus, for nearly 2,000 years the Church had accepted a doctrine of “late humanhood” in one form or another, and the new canon has been in place for only about 150 years. Just as the Roman Catholic Church has not been a picture of unwavering conviction on the status of life and its beginning, neither have other faiths such as Islam, Judaism, and Protestant branches of Christianity, in which religious and secular scholars are split on this issue.

Some regard ESC research as tantamount to abortion, since the blastocyst is destroyed in the process. According to Anne Kiessling, a stem cell researcher, the irony of the situation is that when opponents of research on fertilized eggs and early embryonic development try to stop such research, they actually inhibit scientific understanding of the process. This in turn impedes the development of new ways to prevent pregnancy, perpetuating the need for abortion itself.2 Many proponents of ESC research find that conferring humanhood on the early blastocyst is problematic because, until roughly the fourteenth day of development, the embryo retains the potential to split and form twins. On this basis, they reason that it is at least counterintuitive for a person to split in two, so humanhood cannot be conferred until the potential for a unique human being is actually manifested in the embryo through a propensity to acquire a unique biological personality. Whether or not ESC research implies terminating life, one thing is certain: there are hundreds of thousands of frozen blastocysts in fertility clinics around the world destined to be destroyed.

Because of the ethical controversies surrounding the use of fertilized embryos for the isolation of embryonic stem cells, scientists looked for a way to turn a somatic (body) cell into the equivalent of a human ESC. In principle, all somatic cells in the body carry the same genetic material; differences in gene expression guide the differentiation of cells along commitment pathways to their lineages, from embryonic to adult stem cells and then to specialized somatic cells. In other words, genes are upregulated and downregulated in different sequences and at varying levels, and the resulting expression profile defines a cell type. It should therefore be possible to take a fibroblast (skin) cell and turn it into a pluripotent cell by inducing the expression of factors that recreate the expression profile of an ESC. The induction of pluripotency in fibroblasts was first performed in mice by two different research groups, creating high expectations that the work could be reproduced with human fibroblasts. Then, in November 2007, the research journals Cell and Science each published an article on the induction of pluripotency in human fibroblasts by two independent research groups. The news made headlines across the world as a monumental achievement in stem cell research. Some scientists in the embryonic stem cell research community insist, however, that this breakthrough is far from a replacement for embryonic stem cell research, and that the current methods of inducing pluripotency are problematic for therapeutic application in human beings.

Despite the controversies, it is indisputable that stem cell research holds great potential to cure a myriad of diseases. The new research on fibroblasts is part of a strong social and scientific movement to make such treatments possible, and perhaps it marks the beginning of the end of the ethical controversy.

References

1. Rick Weiss. Stem Cells, The Power to Divide. National Geographic Magazine. July 2005.
2. Kiessling, A., Anderson, S.C. Human Embryonic Stem Cells (second edition). Jones & Bartlett: October 31, 2006.

Exploring Carbon Nanotubes

by: Varun Rajan, Brown ’09

In a list of the most important materials in nanotechnology today, carbon nanotubes are ranked near the top.1 Consisting solely of carbon atoms linked in a hexagonal pattern, these cylindrical molecules are far longer than they are wide, similar to rods or ropes. The prefix nano- means one billionth: a nanotube’s diameter is only a few nanometers, or billionths of a meter. In part because of their unique size, shape, and structure, carbon nanotubes (CNTs) are exceedingly versatile. Proposed areas of application for CNTs range from electronics and semiconductors, to molecular-level microscopes and sensors, to hydrogen storage and batteries.2 However, CNTs’ special combination of strength, low density, and ductility has also led to speculation about their role as “superstrong materials”3 in structural applications, such as a “space elevator”4. Before these science-fiction claims become engineering feats, basic questions about carbon nanotubes’ mechanical behavior must be answered. In his research over the past twelve years, Dr. Boris Yakobson, a Rice professor of materials science, has tackled several fundamental questions concerning the failure of nanotubes and the behavior of dislocations. The materials science term “dislocation” refers to a line imperfection or defect in the arrangement of atoms in a CNT; dislocations are important because they affect a material’s mechanical properties.5

How do nanotubes fail?

Determining how carbon nanotubes fail, or lose their capacity to support loads, is a complicated yet important matter; it must be fully understood before nanotubes are used in structural applications. In the article “Assessing Carbon Nanotube Strength” Yakobson, along with then-postdoctoral student Traian Dumitrica and graduate student Ming Hua, used computer simulation to model CNTs and investigate their failure.

According to Yakobson, simulations are valuable because “in principle you have full access to the details of the structure.” He added that one of the advantages of simulations is that the researcher has full control over the experimental conditions and variables. With respect to carbon nanotube failure, some of the most pertinent variables include the level and duration of the applied load, as well as the nanotube’s temperature, diameter, and chiral angle – the angle, ranging from 0 to 30 degrees, that describes how a carbon nanotube is rolled up from a graphite sheet. In addition to affecting the nanotube’s strain (stretch) at failure, these variables also determine the process by which it breaks. Yakobson found that two different mechanisms can cause nanotube failure. At low temperatures, mechanical failure dominates: the bonds between adjacent carbon atoms literally snap. At high temperatures, on the other hand, the bonds within the nanotube’s carbon hexagons flip, causing the hexagons to become five- and seven-sided figures. This weakens the nanotube structure and initiates a sequence of processes that culminates in complete nanotube failure. Combining the results of numerical simulations and analytical techniques, Yakobson constructed a carbon nanotube strength map: a single figure that illustrates the relationship between the relevant variables, the failure mechanisms, and the failure strain (figure 2). The significance of Yakobson’s research led to its publication as the cover article of the April 18, 2006 issue of Proceedings of the National Academy of Sciences.
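Both the diameter and the chiral angle mentioned above are fixed by the pair of integers (n, m) that specifies how the graphite sheet is rolled up. The short sketch below applies the standard textbook formulas for these two quantities (general results, not something specific to Yakobson’s study) to a few example tubes.

```python
import math

# Standard relations for a (n, m) carbon nanotube (a = 0.246 nm is the
# graphene lattice constant):
#   diameter     d     = a * sqrt(n^2 + n*m + m^2) / pi
#   chiral angle theta = arctan( sqrt(3)*m / (2n + m) ),
# with theta = 0 degrees for zigzag tubes (m = 0) and 30 degrees for
# armchair tubes (n = m).
A_LATTICE_NM = 0.246

def nanotube_geometry(n, m):
    """Return (diameter in nm, chiral angle in degrees) for chiral indices (n, m)."""
    diameter = A_LATTICE_NM * math.sqrt(n * n + n * m + m * m) / math.pi
    chiral_angle = math.degrees(math.atan2(math.sqrt(3) * m, 2 * n + m))
    return diameter, chiral_angle

for n, m in [(10, 0), (10, 10), (12, 6)]:  # zigzag, armchair, and chiral examples
    d, theta = nanotube_geometry(n, m)
    print(f"({n},{m}): diameter = {d:.2f} nm, chiral angle = {theta:.1f} degrees")
```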

How do dislocations behave in carbon nanotubes?

Another area covered by Yakobson’s work is the study of dislocation behavior. While dislocation dynamics in multiwalled carbon nanotubes might seem to be a subject only a materials scientist could love, this area of research has great bearing on CNT use in mechanical and electronic applications. Multiwalled carbon nanotubes (MWCNTs) can be visualized as many single-walled carbon nanotubes arranged concentrically, like tree-trunk rings, and interacting with each other via weak intermolecular forces.6,7 Although somewhat difficult to visualize, dislocations — defects in the atomic structure of a CNT — can be viewed as objects that can move, climb, and collide with one another, leading to the term “dislocation dynamics.”

In his research on this topic, Yakobson collaborated with J.Y. Huang at Sandia National Laboratory and F. Ding, a research scientist in Yakobson’s group. Their experimental procedure involves heating a MWCNT to approximately 2000 °C, which causes its dislocations to mobilize. Using a transmission electron microscope, they tracked the motion and interaction of these dislocations over time. Yakobson said that this powerful microscope gives the experimenters “nearly atomic resolution.” A resolving power of this magnitude produces spectacular images that reveal a rather odd phenomenon: one can observe a dislocation climbing a carbon nanotube wall and combining with a dislocation on an adjacent wall to form a larger dislocation loop, which then continues to climb (figure 3). If this process is repeated throughout the MWCNT, its entire structure becomes a mixture of ‘nanocracks’ and kinks. More importantly, adjacent walls become cross-linked by covalent bonds, whereas formerly they were only weakly connected by van der Waals forces. Cross-linking is important because it “lock[s] the walls together in one entity,” Yakobson said. As a result, there is an increased possibility for transfer between walls, and current can be driven through the cross-linked junction. He also believes that cross-linking is somehow responsible for the mechanical strength of MWCNTs, because the concentric cylinders can no longer easily slide past one another.

Yakobson’s research is simultaneously old and new. It is old because subjects such as dislocation dynamics and material failure are well understood for many materials. Yet it is also new because knowledge in these fields cannot be extended easily to carbon nanotubes.8 Researchers in this field are treading on unexplored ground that will bring the nanotube a step closer to its applications.

References

1. Arnall, A.H. Future Technologies, Today’s Choices: Nanotechnology, Artificial Intelligence and Robotics; A Technical, Political and Institutional Map of Emerging Technologies; Greenpeace Environmental Trust: London, 2003.
2. Collins, P.; Avouris, P. Scientific American 2000, 62-69.
3. Chae, H.; Kumar, S. Science 2008, 319, 908-909.
4. University of Delaware. Space Tourism To Rocket In This Century, Researchers Predict. http://www.sciencedaily.com/releases/2008/02/080222095432.htm (accessed 02/27/08), part of Science Daily. http://www.sciencedaily.com/ (accessed 02/27/08).
5. Dumitrica, T.; Hua, M.; Yakobson, B. Proc. Natl. Acad. Sci. 2006, 103, 6105-6109.
6. Cumings, J.; Zettl, A. Science 2000, 289, 602-604.
7. Baughman, R.; Zakhidov, A.; de Heer, W. Science 2002, 297, 787-792.
8. Huang, J.Y.; Ding, F.; Yakobson, B. Physical Review Letters 2008, 100, 035503.

The Promise of Adult Neurogenesis

by: Elianne Ortiz, Hanszen ’11

Contrary to popular belief, the number of neurons in the human body is not fixed at birth. Through a process called neurogenesis, stem cells continue to differentiate into neurons throughout adulthood in specific regions of the brain — namely the olfactory bulb and the hippocampus. The olfactory bulb is responsible for smell, while the hippocampus plays a role in long-term memory. New neurons are added to the hippocampus with sufficient mental and physical exercise, but their purpose had long remained unknown. A recent study by a team of investigators at the Salk Institute in La Jolla, California, finally shows some promise of shedding light on this mystery. The team genetically engineered mice in which the processes responsible for neurogenesis could be turned off.

In an earlier study, researchers Ronald M. Evans, Ph.D., and Fred H. Gage, Ph.D., had discovered a crucial mechanism that keeps adult neuronal stem cells in an undifferentiated, proliferative state.3,4 After learning more about its specific function, Dr. Chun-Li Zhang, a postdoctoral fellow at the Salk Institute, was able to turn off this mechanism in mice. This procedure effectively suppressed neurogenesis in the hippocampus, allowing the scientists to identify how newborn neurons affect brain function.

The altered mice were then put through a series of behavioral and cognitive tests, one of which yielded results that conflicted with those of the control population. The Morris water maze is used to study the formation of learning strategies and spatial memories. Mice placed in deep water try to find a submerged platform with the help of cues marked along the walls of the pool. As the test was repeated, a normal mouse remembered the cues and located the platform with relative ease. In contrast, the mice genetically engineered to lack neurogenesis showed slower improvement. These mice experienced significant difficulty in finding the submerged platform, and their performance declined as the task was made more demanding. Although they were slower at forming efficient strategies, their behavior was very similar to that of the control mice by the end of the experiment. “It’s not that they didn’t learn, they were just slower at learning the task and didn’t retain as much as their normal counterparts,” Zhang said in an interview with Science Daily.1

This study suggests that neurogenesis has a specific role in the long-term storage of spatial memory, the part of memory responsible for processing and recording information from the environment. “Whatever these new neurons are doing it is not controlling whether or not these animals learn. But these new cells are regulating the efficiency and the strategy that they use to solve the problem,” Gage explained to Science Daily.1

In previous studies, Gage and his team were able to show how certain activities trigger neurogenesis. For instance, increased mental and physical exercise led to more stem cells differentiating into neurons.3 Many of these neurons did not survive, although continued stimulation increased the number that did. Zhang’s study and its genetically engineered model now provide an important tool for others to study the effects of decreased neurogenesis. Previous attempts using radiation and mitotic inhibitors shut down not just neurogenesis but all cell division, and thus led to contradictory results.

The significance of Zhang’s research on adult neurogenesis is well founded. There are over 5 million people in the U.S. who suffer from Alzheimer’s disease and other neurodegenerative disorders. Studies such as these give hope that there may be a way to influence memory function by stimulating neurogenesis with therapeutic drugs. If perfected, such methods might one day allow a debilitating disease such as Alzheimer’s to be treated with a drug, followed by physical and mental stimulation. Many neurodegenerative disorders have no cure, and symptoms can only be alleviated for a short period of time before damage becomes severe. These groundbreaking studies have the potential to save millions from the trauma of memory deterioration.

References

1. Science Daily. http://www.sciencedaily.com/releases/2008/01/080130150525.htm. (accessed Feb 26, 2008).

2. Shi Y, Chichung Lie D, Taupin P, Nakashima K, Ray J, Yu RT, Gage FH, Evans RM. Expression and function of orphan nuclear receptor TLX in adult neural stem cell. Nature. 1 Jan 2004.

3. Tashiro A, Makino H, Gage FH. Experience-specific functional modification of the dentate gyrus through adult neurogenesis: a critical period during an immature stage. The Journal of Neuroscience. 21 Mar 2007.

4. Zhang CL, Zou Y, He W, Gage FH, Evans RM. A role for adult TLX-positive neural stem cells in learning and behaviour. Nature. 21 Feb 2008.