
Volume 3 (Spring 2010)

Forensic Dermatopathology

by: Harina Vin

Forensic science, as defined by Wikipedia, is “the application of a broad spectrum of sciences to answer questions of interest to a legal system.” One such science, still largely undeveloped for forensic use, is dermatology and its subspecialty, dermatopathology. Examination of the skin is a critical part of the forensic examination, as the skin has the potential to reveal signs of internal disease or external trauma, an approximate time of death or injury, or clues to the identity of an individual.

Detection of Drug and Chemical Use

The simplest way to detect an individual’s drug and chemical use in forensic cases is to analyze hair and nail samples. This method is particularly useful because samples are easily and non-invasively collected.3 Biological substances accumulate in hair and nail, where they can be measured even in small samples. Hair and nails may also record a history of drug intake and abuse, as well as toxin exposure. The nail of the large toe reflects the body’s exposure to toxins over the previous 12 months.3 In the same way, long scalp hair may provide retrospective information covering the previous 5 to 7 years. This is possible because the basic chemical composition of the hair shaft and nail plate is not influenced by changes in blood chemistry or by exposure to chemicals occurring after the hair and nail have formed.9, 10
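As a rough illustration of why hair can serve as a multi-year timeline (the growth rate below is a commonly cited average, not a figure from the article), scalp hair grows at roughly 1 cm per month, so each centimeter of the shaft records about one month of exposure, and a record spanning 5 to 7 years requires hair on the order of

$$5\text{–}7\ \text{years} \times 12\ \tfrac{\text{months}}{\text{year}} \times 1\ \tfrac{\text{cm}}{\text{month}} \approx 60\text{–}84\ \text{cm}$$

in length; shorter hair simply covers a proportionally shorter window.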

Transverse leukonychia, also known as Mees’ lines, are nail abnormalities caused by toxins such as arsenic and thallium.13, 16 Fingernail clippings of victims have been used to look for the DNA of aggressors in cases where the victims struggled to defend themselves.8 Identification, however, may be difficult because the aggressor’s DNA is often present in quantities too low to be detected. Certain patterns of hair loss are also indicative of particular poisonings and may lead to the diagnosis.5

In one case study, a 16-year-old girl was admitted to a hospital for weakness and weight loss and was found to have white, transverse, nonpalpable lines on each of her nails. Dermatology was consulted regarding the patient’s abnormal nails, which were consistent with Mees’ lines. The patient’s hair and nails were analyzed, arsenic was found, and a criminal investigation was initiated. The investigation uncovered the following:

“Upon taking further history, her father had died recently. His body was exhumed and arsenic was found. It was then discovered that the previous husband of the child’s mother had died in middle age. His body was exhumed and arsenic was again found. About that time, the mother disappeared. She was found several years later and charged and convicted of murder and attempted murder.”3 In this case, dermatologic findings, confirmed by hair and nail analysis, led to the discovery of a crime.

Detection of Abuse

In living patients, certain chronic and recurring dermatologic symptoms can follow psychologically traumatic events. Examples are cutaneous sensory flashbacks; autonomic hyperarousal (with symptoms such as profuse sweating or flare-ups of an underlying stress-reactive dermatosis); conversion symptoms (such as numbness, pain, or other medically unexplained cutaneous symptoms); and cutaneous self-injury (manifesting in many forms, including trichotillomania, dermatitis artefacta, and neurotic excoriations, tension-reducing behaviors in patients who have posttraumatic stress disorder).4 These dermatologic symptoms can be indicative of psychological abuse and pursued by forensic scientists. Burn victims also exhibit certain skin characteristics. In electrical lesions, the cell cytoplasm appears homogeneous, often with a peculiar white color, in hematoxylin-eosin-stained sections, and the overall morphology of the cells is affected, likely because of pH shifts within the cells.15

Interestingly, forensic dermatology can also be used to rule out abuse. A case study described an 82-year-old man in a nursing home receiving treatment for colon cancer. One night, he unexpectedly died. His right cheek was described as “red with blisters,” and his family was concerned that he had suffered a burn to his face due to neglect or abuse by the nursing home staff. An autopsy confirmed a diagnosis of metastatic adenocarcinoma of the colon. Skin biopsies were also performed, and cells of the epidermis stained positively for anti-varicella zoster antibody; viral tissue culture subsequently grew varicella zoster virus. The findings established “herpes zoster as the cause of the man’s right cheek erythematous-based blisters and excluded the possibility of a burn, neglect, or abuse by the nursing home staff.”2

Time of Death

Estimating the age of skin bruises is of considerable importance in forensic pathology. One way age is estimated is through time-dependent changes in color. For example, a bruise older than 18 hours is predicted to appear yellow in light-microscope histology.6 Apoptotic activity found in post-mortem skin is probably the most reliable way of determining the age of an injury.12 In general, the apoptotic process is associated with condensation of the cytoplasm followed by phagocytosis and digestion by surrounding cells, the steady-state mass of a tissue being related to the balance between cell formation (mitosis) and cell destruction (via apoptosis).7 Thus, the relationship between apoptosis and physical injury in skin can aid forensic dermatologists in identifying time of death.

Victim Identification

Some obvious methods for identifying unknown human bodies are visual identification, fingerprints, and DNA fingerprinting.17 Another important but less often discussed forensic tool for establishing the identity of an unknown deceased person is occupational skin lesions, that is, lesions acquired in the course of a person’s daily profession. Different occupations produce characteristic effects on different parts of the body through the use of tools or machines or exposure to different chemicals in the working environment. Examples of profession-specific dermatologic indicators include “rough hands seen in manual labourer involved in construction work, excavated chest in a cobbler, callosities of finger tips in a stenographer, callosities of palm at the base of fingers in butchers, burn scars over the back of both hands seen in blacksmiths, involuntary permanent tattooing of micro particles of coal found on the hands of the labourers involved in mining industry.”11

A rare case was reported in which the identity of an unknown elderly male who had committed suicide by hanging was established from the symmetrical distribution and pattern of skin lesions acquired in the course of his occupation. As a coconut tree climber, he gripped the coconut tree with both hands and feet and then pushed his body up to climb higher. This produced intermittent friction and pressure over the skin of the forearms, palms, and soles, causing deposition of thickened, vertically oriented collagen bundles in the papillary dermis and resulting in lichenification.1 The family of the individual later confirmed his identity.

The Future

Within the next decade, it is likely that the implementation of currently available and new techniques for the diagnosis and relevance of skin and mucosal conditions will continue to provide significant scientific advances in this promising area of forensics.2 These fields are awaiting further definition, categorization, and investigation.

References


1. Adams R.M. Occupational skin disease. In: I.M. Freedberg, A.Z. Eisen, K. Wolff, K.F. Austen, L.A. Goldsmith, S.I. Katz, et al., Editors, Fitzpatrick’s Dermatology in General Medicine, McGraw-Hill, New York (1999), pp. 1609–1620.
2. Cohen P.R. Forensic Examiner [Online] Fall 2009.
3. Daniel C.R.; Piraccini B.M.; Tosti A. Journal of the American Academy of Dermatology. [Online] 2004, 50.2, 258-261.
4. Gupta M.A.; Lanius R.A.; Van der Kolk B.A. Dermatologic Clinics. [Online] 2005, 23.4, 649-656.
5. Hubler W.R. South Med J. [Online] 1966, 59, 436–442.
6. Langlois N.E. and Gresham G.A. Forensic Sci. Int. [Online] 1991, 50, 227–238.
7. Olson P.L. and Everell M.A. J. Cutaneous Path. [Online] 1975, 2, 53–57.
8. Oz C.; Zamir A. J Forensic Sci. [Online] 2000, 45, 158–160.
9. Palmeri A.; Pichini S.; Pacifici R.; Zuccaro P.; Lopez A. Clin Pharmacokinet. [Online] 2000, 38, 95–110.
10. Pichini S.; Altieri I.; Zuccaro P.; Pacifici R. Clin Pharmacokinet. [Online] 1996, 30, 222–228.
11. Polson C.J. Identification. In: C.J. Polson and D.J. Gee, Editors, The Essentials of Forensic Medicine, Pergamon Press, Oxford (1973), pp. 85–87.
12. Sawaguchi T.; Jasani B.; Kobayashi M.; Knight B. Forensic Science International. [Online] 2000, 108.3, 187–203.
13. Seavolt M.B.; Sano R.A.; Levin K.; Camisa C. Int J Dermatol. [Online] 2002, 41, 399–401.
14. Shetty B.S.; Rao J.; Samer K.S.; Salian P.R.; Shetty M. Forensic Science International. [Online] 2009, 183.1, 17–20.
15. Thomsen H.K.; Nielsen D.O.; Aalund O.; Nielsen K.G.; Karlsmark T.; Genefke I.K. Forensic Science International. [Online] 1981, 17.2, 145–152.
16. Tromme I.; Van Neste D.; Dobbelaere F.; Bouffioux B.; Courtin C.; Dugernier T. Br J Dermatol. [Online] 1998, 138, 321–325.
17. Weedn V.W. Clin. Lab. Med. [Online] 1998, 18, 115–137.

The Black Hole War

by: Phillip Choi

Most people don’t know or even care too much about black holes. The basic premise of a black hole is easy to grasp: a black hole is an incredibly compact mass with so much gravity that nothing, not even light, can escape from it. But what happens to something when it does get sucked into a black hole? The answer to this seemingly innocuous question was debated by Leonard Susskind and Stephen Hawking for more than two decades. Susskind’s account of his twenty-three-year battle with Hawking over the fundamental nature of black holes is brilliantly recounted in his The Black Hole War.

The Black Hole War started in 1981 during an informal meeting of eminent physicists in San Francisco at the mansion of a rich New Age self-help guru. During his presentation, Stephen Hawking proposed that information that falls into a black hole is eventually lost permanently in black hole evaporation. If information really were lost forever within black holes, one of the key principles holding quantum mechanics together, information conservation, would be violated. Information conservation is a simple law stating that information cannot be lost or gained in the universe. That day, only Susskind and Gerard ‘t Hooft, a preeminent Dutch physicist, were troubled by Hawking’s conclusion. Despite all the mental acrobatics normally required for theoretical physics, Susskind and ‘t Hooft felt there was something intuitively wrong with losing information within a black hole, because losing information is the same as increasing entropy. This entropy would become heat, which meant, “if Stephen was right, empty space would heat up to a thousand billion billion billion degrees in a tiny fraction of a second.” This bit of reasoning was the beginning of twenty-three years of work to conclusively prove Hawking wrong.

Susskind recognizes that as a theoretical physicist writing for a lay audience, he must provide a significant amount of background knowledge to make the Black Hole War somewhat intelligible. He starts at the most basic level possible by talking about the scale of numbers that the reader will be exposed to in the book. Then, Susskind deftly explains numerous concepts important to gravity, thermodynamics, relativity, and quantum mechanics, such as tidal forces, the Uncertainty Principle, the Equivalence Principle, time dilation, entropy, and so on. The presence of many illustrations and the absence of unnecessary equations are greatly appreciated.

The term “black hole” only entered the physics lexicon in the mid-1960s; it is widely credited to John Wheeler. Before then, black holes were called gravitationally completely collapsed stars. Their existence was first postulated by John Michell and Pierre-Simon Laplace in the late 18th century. However, it was not until the latter half of the 20th century that black holes became more than extremely dense dead stars. It was agreed that only two key components are needed to describe a black hole: the singularity and the horizon. The singularity is the center of the black hole, where the mass of infinitely high density resides. The horizon is the invisible spherical boundary between the black hole and the rest of the universe; once an object crosses the horizon, it will be pulled into the singularity and destroyed. Black holes were once thought to be cold balls of mass that would be permanent fixtures in the cosmos. Now it is recognized that black holes can radiate heat, split into smaller black holes, evaporate into nothing, and engage in a whole host of other lively activities.

Susskind’s quest to prove Hawking wrong about the destruction of information in black holes results in two major theories: Black Hole Complementarity and the Holographic Principle. It should be noted that he worked with ‘t Hooft on the Holographic Principle. Both theories get around Hawking’s assertion by stating that even information that is sucked into a black hole is not actually “in” the black hole.

At the most basic level, Black Hole Complementarity states that what one observes from inside a black hole is different from what one observes outside the same black hole. For example, say a person falls into a black hole while his friend watches in horror from a nearby spaceship. The friend outside the black hole will see his companion swallowed up and disintegrated as he passes the horizon. The person falling into the black hole, however, will experience something entirely different: he will cross the horizon without noticing any change and continue living. He won’t feel his body being torn to bits by the black hole. (That happens when he reaches the singularity.) There seems to be a paradox here, as two different events take place at the same moment. However, the paradox can be reconciled by the fact that, even in principle, there is no way to experimentally demonstrate it, because the two observers can never come together and compare their observations about crossing the horizon. Black Hole Complementarity is analogous to the wave-particle duality of light. In the same way that light is both a particle and a wave, the observer crossing the horizon is destroyed and he is not. Black Hole Complementarity circumvents Hawking’s claim because, even though observers on the outside see the bit of information absorbed at the horizon and eventually carried away as Hawking radiation, to the infalling observer that same bit of information survives inside the black hole; in neither description is it destroyed.

The Holographic Principle is the second major development that came from Susskind’s war with Hawking. A hologram is a three-dimensional image created by focusing light projected from a two-dimensional film surrounding the image. Susskind and ‘t Hooft’s Holographic Principle states that “everything inside a region of space can be described by bits of information restricted to the boundary.” This means that any object in the universe, even the largest ones such as stars, galaxies, and the universe itself, is simply a hologram created from a massive, information-containing shell that surrounds the object. Whenever an object enters the boundary of one of these shells, the information describing the object is encoded into the boundary, and the boundary expands in proportion to the amount of information added. Incredible as it sounds, the Holographic Principle is now an accepted tool of theoretical physics. The Holographic Principle solidly proves Hawking’s claim to be incorrect: if the holographic shell of a black hole is taken to be its boundary and the black hole itself is the object, any object or bit of information that enters the black hole will have its information encoded into the shell before entering. Thus, the information is conserved and not lost.
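The review never writes it down, but the standard quantitative statement behind “bits of information restricted to the boundary” is the Bekenstein-Hawking entropy, which counts a black hole’s information capacity in proportion to the area A of its horizon rather than its volume:

$$S_{BH} = \frac{k_B c^3}{4 G \hbar}\,A = k_B \frac{A}{4 \ell_P^2}$$

where $\ell_P$ is the Planck length; loosely speaking, about one bit of information for every few Planck-sized patches of horizon, which is why adding information to the shell makes its area grow.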

As Susskind and ‘t Hooft develop the tools necessary to prove that black holes don’t swallow up and obliterate information, the nature of modern research becomes increasingly clear. Although Susskind and ‘t Hooft thought up the concepts of Black Hole Complementarity and the Holographic Principle, they had to collaborate with many individuals to make the theories mathematically consistent with the physics establishment. Contributions from Cumrun Vafa, Ashoke Sen, Joseph Polchinski, Andrew Strominger, Juan Maldacena, and Claudio Bunster were necessary to make Susskind and ‘t Hooft’s ideas sound. Brainstorming often took place at conferences, where the various physicists would have relatively casual conversations about the problems they were having in their research. The general consensus regarding theories was often taken via a straw poll at these conventions. Research didn’t take place only in the offices of individual professors; instead, it looked like a large family bickering and working together to find the “truth.”

Throughout the book, Susskind’s personality comes through loud and clear when he isn’t straightforwardly explaining the intricacies of his theories. It is very apparent that he is a grizzled physics professor willing to tell the story of the war as he saw it, without censoring himself. He recounts his encounters as a young student with other giants of theoretical physics, such as Richard Feynman. When explaining the nature of black holes, he takes some time to discuss the reality behind some popular uses of black holes, such as time travel and teleportation. The consensus is that we can travel forward in time, but not backward, and that wormholes, if they exist, could never actually be used to travel to an alternate universe or another part of this one. What’s most interesting is his characterization of Stephen Hawking. Very few people would have the audacity to call Hawking dense or to get annoyed at how long it takes him to formulate answers with his speech synthesizer, but Susskind does. He’s willing to see the situation for what it is, not what it should be.

The Black Hole War is both highly informative and entertaining. Patience is required to follow the physics, but it is rewarding to see how Susskind systematically attacks Hawking’s claim. I highly recommend The Black Hole War if you are looking to learn a little about how black holes work and how physicists wage war with each other.

Autism: Did the word make you think of mitochondria?

by: Christine Younan

For a number of years, autism has been known to the general public as a vague disorder of the brain, characterized by symptoms including abnormal social interaction, inappropriate communication, and the presence of stereotyped behaviors.1 The disorder presents in a way that suggests its origins lie in some grand, subtle mismatch of neural networks in the brain; only recently have pediatric neurologists begun looking beyond this seemingly obvious source of dysfunction to a more minuscule, mundane organelle: the mitochondrion. The mitochondrion offers a broad avenue for autism researchers to begin seeking out a potential correlation between autism and specific abnormalities of the mitochondria, organelles present in the hundreds in nearly every cell of the body.

The mitochondrion: I remember that having some sort of significance in general biology. Let’s refresh. The mitochondrion is ultimately the cell’s powerhouse, producing vast quantities of adenosine triphosphate, or ATP. In doing so, the mitochondria of the body break down carbohydrates and fats through a series of enzyme cascades and an electron transport chain. Because of the mitochondria’s ubiquity in the body and the essential metabolic processes they perform, the consequences of defects in this precious organelle already seem dreary. When the mitochondria are deficient in their performance, “final end products from several metabolic systems may build up, causing in the metabolic systems themselves to shut down”.1 Additionally, the mitochondria play a role in signaling the cell to apoptose (a form of cellular suicide). When mitochondria become defective, they can signal cells to begin apoptosis prematurely, which leads to invariably unhealthy effects on the tissue in the vicinity and, ultimately, on the organism as a whole.1 A defect in the mitochondrion can also “create reactive oxygen species that can be damaging to neighboring healthy tissues, as well as to individual cell function”.1 Ultimately, the outlook appears quite challenging for cells with such dysfunctional mitochondria, but how do these effects interact to produce the symptoms of autism?

Because the mitochondria are primarily involved in the production of cellular energy, the overall effect of ill-suited mitochondria falls on the organs of the body that carry out ATP-intensive processes. These processes include thinking, digesting, fighting off the latest strain of flu, and, in short, many seemingly basic actions that demand a plethora of energy and hide great underlying complexity. A child with a mitochondrial disorder may therefore exhibit “developmental delay, loss of developmental milestones (i.e., regression), seizures, muscle weakness, gastrointestinal abnormalities, and immune dysfunction”.1 The overall presentation of these symptoms strongly resembles many of the multi-symptom effects of autism. Even without specific tests, the evident overlap between the two disorders makes it quite plausible that a child with autism may also have a mitochondrial disorder.

The actual diagnosis of a mitochondrial disorder involves a series of blood tests and urine samples that are often not pursued for lack of awareness. One significant marker of mitochondrial disorder seen in some autistic children, however, is an elevated level of lactic acid. When the mitochondria cannot break down pyruvate in the cell, the pyruvate is instead converted into lactic acid. A study of the relationship between autism and elevated lactic acid, along with other traditional markers associated with mitochondrial disease, was performed in Portugal: children believed to have autism underwent a muscle biopsy, and nearly 7% were found to have confirmed mitochondrial disease.1 This remarkable study marks one of the beginnings of an increasingly well-founded belief that a subset of autistic children may have overlooked mitochondrial dysfunction.
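For readers who want the chemistry behind this marker (standard biochemistry, not something spelled out in the cited study), pyruvate is not so much decomposing as being enzymatically reduced: when mitochondrial oxidation stalls, lactate dehydrogenase converts pyruvate to lactate in the cytosol, regenerating the NAD+ the cell needs to keep glycolysis running:

$$\text{pyruvate} + \text{NADH} + \text{H}^+ \;\xrightarrow{\ \text{lactate dehydrogenase}\ }\; \text{lactate} + \text{NAD}^+$$

The accumulating lactate (lactic acid) is what the blood tests then detect.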

Additionally, the connection between mitochondrial disease and autism can be vividly observed in the treatments applicable to both. Although medical treatments have truly improved, no exact remedy for mitochondrial disorders has been found. Certain vitamin supplements and diet adjustments, such as coenzyme Q10 or its analogue idebenone, are the essence of treatment for this disease. The specific nuances of the prescribed treatments are “based upon rational biochemistry and knowledge of what vitamins/cofactors may supplement the defective enzyme machinery or which diet may provide the best fuels for the specific disorder”.1 Such treatment is analogous to the diet adjustments delineated for children diagnosed with autism.

Clearly, autism and mitochondrial disease share a series of parallel symptoms, diagnoses, and treatments that support the idea that mitochondrial disease may be the precipitating factor of autism in many children. The mitochondrion, with its vast sequence of enzyme cascades and its formation and supply of energy through ATP, provides a substantial means by which autism researchers may begin to pin down what causes the symptoms of a disorder that affects a significant proportion of young children today. Much work has yet to be done in this relatively new field, but the inner workings of the mitochondrion no doubt hold an overwhelming amount of potential in the effort to understand and ultimately cure autism.

Reference

1. Frye, R.; Poling, J. Mitochondrial Disorders and Autism. Biomedical, 2009.

Sleep Deprivation: Normal Lifestyle or Dangerous Epidemic?

by: Susan Xie

Imagine an epidemic severe enough to impair memory and cognition, increase the risk of occupational and automobile injury, and alter typical brain responses to the extent that they resemble those of people with psychiatric disorders.1 After less than a week of experiencing these effects, otherwise healthy people succumb to a pre-diabetic state, a negative emotional outlook, and an inability to function normally in day-to-day activities.3,4 Furthermore, several large-scale studies from all over the world have reported an association between this epidemic and heart disease, high blood pressure, stroke, and obesity.3 With health risks this substantial, it is only logical that we would want to take all possible measures to prevent ourselves from becoming the next victims. But what if this so-called “epidemic” already runs rampant, especially on college campuses? What if our own habits and lifestyles naturally enable it to flourish and gradually claim student after student?

If you have ever stumbled back to your dorm early in the morning after camping out in Fondren Library all night, guzzling energy drinks or coffee to get yourself through a cram session, you are probably already well-acquainted with this bane of all things productive: sleep deprivation. If there is one thing we know for sure about sleep, it is that attaining rest is absolutely crucial, no matter how elusive or impossible it seems on some days. After all, biologically speaking, animals are most vulnerable when sleeping, yet every animal that has been studied thus far requires sleep, indicating that its importance outweighs the evolutionary risks of periodically losing consciousness.3 A well-established demonstration of the basic necessity of sleep was completed through a series of studies done in the 1980s. When a group of rats was kept awake indefinitely, individual rats started dying after only five days from sleep deprivation—the same amount of time that the animals would have lasted had they instead been subjected to food deprivation.3 Considering that sleep was shown to be just as essential to survival as sustenance, it should not be surprising that humans spend about one-third of their lives sleeping—time that, rather than being viewed as a repository of extra hours that we can access on busy days, should be valued as a prerequisite to achieving our full potential and performance levels.

However, asking college students to consistently get a full night’s sleep will seem to many more like a taunt than advice that can feasibly be followed. Even as students shoulder increasingly heavy loads of academic, extracurricular, and job commitments, they may still harbor reservations about reducing the amount of quality time devoted to friends or plain procrastination. Under these conditions, the daily requirement of 7-9 hours of sleep for healthy adults becomes nothing more than an ideal. Even worse, one particularly busy day has the potential to initiate a vicious cycle: when we get inadequate rest for one night, we become less productive the next day and are thus forced to stay up later again to finish our work. Further complicating matters is the fact that students routinely overestimate how quickly they can finish assignments, when in reality, projects typically require twice the number of predicted days to complete.8 Much of the time, however, we are not even completely aware of how impaired or inefficient we have become. In fact, staying up late is often considered a “badge of honor” among college students, with the all-nighter regarded as a major “rite of passage.”8

Still, it does not take extensive scientific investigation to figure out that depriving ourselves of sleep is ultimately detrimental. All health risks aside, there is still the question of whether substituting sleep with late-night studying has true academic worth. While most students will contend that it is worthwhile to sacrifice sleep for extra time to learn and review material before an exam or to finish a paper, studies indicate that pulling all-nighters is associated with slightly lower GPAs, decreased alertness and delayed reactions, increased tendencies to make mistakes, and impaired abilities to think, process, and recall information.1,8 These are all factors that compromise a student’s overall performance and somewhat defeat the purpose of staying up later in the first place.

The effects of sleep deprivation also have serious implications regarding learning and memory. In a study conducted by Matthew Walker, director of the Sleep and Neuroimaging Lab at the University of California, Berkeley, college students who had been awake for more than 24 hours performed 40% worse when memorizing lists of words than they would have with a night of sleep.3 Additionally, Walker found that sleep actually enhances memories—after a full night of rest, students not only came back the next day feeling refreshed, but also performed better than they did the day before. After experimental subjects learned and repeatedly typed a random sequence of numbers, they were tested at different times of the day to determine the extent and effectiveness of learning. The group that learned the sequence in the morning and was tested 12 hours later exhibited about the same performance level. On the other hand, the group that learned the sequence late in the day and was tested after a night of sleep showed a 20-30% improvement in performance.3 Therefore, according to Walker’s findings, the notion that we must stay awake longer to get more work done is misguided and counterproductive. Giving in to the temptation of sleep is not necessarily yielding to weakness; rather, it is making a rational decision that is most conducive to accomplishing the maximum amount of work in the long run.

Aside from curbing students’ academic potential, sleep deprivation also warps our personalities and strains our interaction with others. After only one night of complete or even partial sleep deprivation, people exhibit much stronger negative emotions the next day and are likelier to remember bad experiences as opposed to positive ones.4 According to a key study conducted by radiologist Seung-Schik Yoo of Brigham and Women’s Hospital, people who had been deprived of sleep for 35 hours showed much greater activation of the amygdala (a primitive part of the brain that controls emotional arousal) when viewing such upsetting images as pictures of mutilated animals.4 This change in brain activity in response to lost sleep is so significant that even after several nights of quality sleep, people still have a “horrible bias shift” regarding their memories of the day following insufficient sleep.4 Not surprisingly, sleep deprivation can lead to tension, depression, and confusion; it is also associated with a generally lower satisfaction with life, an increased number of absences from classes or work, and a heightened risk of inadvertent injuries and death.8 Currently, the definite long-term effects of chronic sleep deprivation on learning, emotion, social relationships, and health have not been completely elucidated, but problems controlling impulses and emotions, coupled with sleep deprivation, are likely to lead to a “negative spiral” of fatigue, fluctuating emotions, and risky behavior.2

So now that we know the grave extent of this self-inflicted epidemic, where do we go from here? Is there a cure, or are there at least preventive measures? If so, are they within our reach? Most of us can manage one or two late nights well enough with caffeine and a sufficient amount of ambition, interspersed with adequate physical activity to keep us awake, but such arrangements are never long-term. The most important consideration to keep in mind is that sleep deprivation has a cumulative effect. After just a single night of 4-6 hours of sleep (not to mention anything less than that amount), people already begin to experience difficulties in remembering information, thinking quickly, and reacting in a timely manner; thereafter, each additional night of sleep deprivation only adds to the growing sleep debt.3 At some point, these deficits accumulate to an extent where the only effective cure is to—you guessed it—sleep away the cognitive impairment. This is especially true of people who are chronically sleep-deprived. Even though they tend to harbor the erroneous belief that they have trained themselves to work continuously and function perfectly well with fewer hours of sleep per night, they actually overestimate their own capabilities considerably, much like people under the influence of alcohol, and exhibit no such convenient adaptation to their hectic lifestyles.3

From these findings, it seems that no man-made alternative will ever fully emulate the profound effect that sleep has on various aspects of our mental and physical health. Every single one of us would most likely pounce on the opportunity to schedule some sort of session into our busy agendas that promises to improve our emotional outlook, decrease stress, boost memory and cognition, revive our physical well-being, and promote alertness and efficiency in handling everyday tasks. In fact, if a full night of sleep is unachievable, “power naps” can do just that. Taking the time to nap for about 20-30 minutes during the day boosts our working memory, information retention, alertness, and stamina; best of all, it eliminates drowsiness, counts toward the average total of 7.5-8 hours of sleep we need daily, and helps us make the most of limited time.3,5 So next time you need a boost, reaching for that energy drink may not be the best answer—especially not when you have a healthier, more effective option that is completely free of charge. In battling the ravages of this so-called epidemic, sleep is indeed one of the most underused, yet powerful, tools in our arsenal. Now, it is up to us to determine for ourselves whether groggily wading through that assignment would truly be more productive than catching a bit of proper, refreshing shut-eye.

References

1. Breus, Michael J. Sleep Habits: More Important Than You Think. http://www.webmd.com/sleep-disorders/guide/important-sleep-habits (accessed 10/25/09), article from WebMD. http://www.webmd.com/ (accessed 10/25/09).
2. Carpenter, Siri. Sleep deprivation may be undermining teen health. Monitor on Psychology [Online] 2001, 32, 9. http://www.apa.org/monitor/oct01/sleepteen.html (accessed 10/25/09).
3. Finkelstein, Shari (producer). The Science Of Sleep. 6/15/08. http://www.cbsnews.com/stories/2008/03/14/60minutes/main3939721.shtml (accessed 10/25/09), article from CBS News. http://www.cbsnews.com/ (accessed 10/25/09).
4. Foreman, Judy. Sleep deprivation and negative emotions. 8/3/09. http://www.boston.com/news/health/articles/2009/08/03/sleep_deprivation_and_negative_emotions/ (accessed 10/25/09), article from The Boston Globe. http://www.boston.com/news/ (accessed 10/25/09).
5. Marten, Sylvia. How to Power Nap at Work. 4/9/07. http://www.spine-health.com/blog/ergonomics/how-power-nap-work (accessed 11/26/09).
6. Myers, David G. Psychology: Ninth Edition; Worth Publishers: New York, 2008.
7. Student Health Services, Texas A&M University. Sleep and the College Student. 9/08. http://healthed.tamu.edu/pdfs/General/sleep.pdf (accessed 10/25/09).
8. Yahalom, Tali. College students’ performance suffers from lack of sleep. 9/17/07. http://www.usatoday.com/news/health/2007-09-16-sleep-deprivation_N.htm (accessed 10/25/09), article from USA Today. http://www.usatoday.com/ (accessed 10/25/09).

Fruitless, Behavioral Genetics

by: James Liu

Nearly a century ago, Morgan, Sturtevant, Bridges, and Muller, pioneers in Drosophila melanogaster (fruit fly) research, used this animal model to elucidate the chromosomal theory of heredity. They ultimately determined that chromosomes indeed carry genes and are responsible for sex determination. Since then, D. melanogaster has been one of the most widely and thoroughly studied animal models in biology. Much of this is due to the ease of working with D. melanogaster in the lab: its short generation time, the lack of genetic recombination in males, and so on. Furthermore, advances in genetic manipulation for generating mutants (P-element insertions and irradiation) have allowed fly researchers to study the function of many Drosophila genes.

One fascinating field that has gained momentum in fly research is the convergence of behavioral studies and molecular genetics: looking at how genes, individually and acting in concert, influence fly behavior. Determining the link between a genetic defect and a behavioral alteration is much more complex than merely knocking down a gene and observing the resulting defect. Researchers have studied the relationship between genes and behavior from a holistic perspective, beginning with the behavioral defects in these mutants, down to the neurological aberrations, and further down to the molecular changes underlying them.

Fruitless (fru), for example, is one of these genes. In fruit flies, it is well known as one of the sex determination genes. It was first identified in viable male fru mutants that demonstrated abnormal mating behaviors. Mutant fru males failed to distinguish between male and female mates, whereas mutant fru females were unaffected.1 Various fru mutants, ranging from complete abolition of the gene to more subtle alterations, have since been created. The “degree” of fru mutation has been shown to affect specific steps in male courtship behavior. A severe deficiency in fru can abolish male mating behavior entirely.2 Less severe mutations result in abnormalities in the male’s ability to produce the correct “song” during the mating ritual.3

Fru is also expressed in the central nervous system (CNS), and its expression is highest during the fly’s pupal phase. It has been speculated that sensory information transmitted to the CNS is processed by the neurons that express fru. At a cellular level, fru may be involved in shaping the sex-specific neuronal circuitry essential for proper mating behavior.4 However, much remains to be uncovered about the neurological role of fru in influencing D. melanogaster behavior.

Much is yet to be accomplished in understanding how genes affect behavior. D. melanogaster is an efficient model for this type of study because of the nature of its mating. Mating is divided into specific steps, each requiring a set of visual, tactile, olfactory, and auditory cues. Studying mutants defective in courtship behavior has revealed many interesting behavioral abnormalities. But we have yet to make a clear link between how genes coordinate neuronal development and circuitry and how that circuitry ultimately influences behavior. Many neuronal diseases that we observe today–Alzheimer’s, Parkinson’s, Huntington’s–all have a genetic component. Thus, understanding how genes can affect neurodevelopment and ultimately influence behavior can provide important lessons for understanding how our own genes affect behavior.

Bibliography

1. Sokolowski, M. B., Nature Reviews 2001, 2, 879-890.
2. Goodwin S. F.; Taylor B. J.; Villella A.; Foss M.; Ryner L. C; Baker B. S.; Hall J. C., Genetics, 2000, 154, 725-745.
3. Villella, A.; Gailey, D. A.; Berwald B.; Ohshima S.; Barnes P. T.; Hall J. C., Genetics, 1997, 147, 1107-1130.
4. Baker. B.; Taylor B.; Hall J., Cell, 2001, 105, 13-24.

Programmable Nano Bio Chips

by: Eva Thomas

Medicine could not have advanced without technology for diagnosing conditions such as diabetes, cancer, and other diseases. Biosensors are devices that provide information about analytes, the chemicals to be investigated within the human body. The information provided by biosensors can be analyzed to give diagnoses. These biosensors work much like the senses of smell and taste, in which receptors on the tongue and in the nasal cavity bind specific chemicals in order to detect them. In the field of bioengineering, there is a movement toward miniaturized design, driven by the need to be more cost-effective through the mass production of disposable devices. Beyond lower cost and increased availability, nanoscale assay technology offers rapid analysis and less waste of sample volume.

Jesse V. Jokerst and John T. McDevitt have teamed up to research the creation of nanoscale diagnostics. Jokerst recently obtained his doctorate in chemistry with McDevitt at the University of Texas at Austin before McDevitt moved to Rice University. They condensed an entire clinical laboratory into a chip to form a Programmable Nano-Bio-Chip (PNBC), or what they have dubbed the “Lab-On-A-Chip.”6, 2, 1 The goal of the McDevitt lab at Rice University is to optimize this device by making it more accessible and affordable and by discovering more applications for the biosensors. The advantages of programmability include modularity (with interchangeable parts), flexibility, and the ability to process and learn new biomarker signatures, which are indicators of various conditions. Particularly because the assay is parallelized, the nanochip achieves large effects that overcome noise within the body even at the nanoscale.1

The device was first designed as an electronic “taste chip,” as it emulates taste buds in its recognition of various substrates.1 A set of design criteria was considered during the development of the nano biosensors. A low cost per diagnosis would make the process more accessible. The device should be simple and easy to use. It should be as small as possible to decrease invasiveness. The ability to test for multiple analytes would make the device more flexible and efficient. A quick reaction with a short turnaround time would allow for a faster response from the device.2 The device is intended to cost $1 per test, with a one-time-use, disposable labcard. However, as the field of nanotechnology is still growing, there are difficulties in reaching these goals. The PNBCs are entirely self-contained, which sets them apart from other nanosensors that require laboratory support in clinics. Bioreactions occur, and are subsequently measured, on the chip.2 Agarose beads, acting as spongy capture agents, are conditioned, perhaps with antibodies, to be sensitive to the desired analytes.5 The 3D nature of the beads allows more rapid isolation of the analytes than the planar arrays of other assay methods.2 Microfluidics are implemented with the beads in order to both treat the sample and detect the target molecules.1 Fluorescent dye can be added as an indicator; the strength of fluorescence of the beads can be correlated with the concentration of the analyte.2 The device can be modified from a chemical processing unit to a cellular processing unit by replacing the panel of beads with a polycarbonate membrane, which acts as a filter at a larger scale than the bead-based chemical processing unit.2 The membrane-based cellular processing can be used for cell counting and typing.3
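To make the bead readout concrete, here is a minimal, hypothetical sketch (the numbers, function name, and linear model are illustrative assumptions, not the McDevitt lab’s actual software) of how a fluorescence signal might be converted to an analyte concentration using a calibration curve built from known standards:

```python
# Hypothetical sketch: converting bead fluorescence to analyte concentration
# via a simple linear calibration curve. All values are made up for illustration.
import numpy as np

# Calibration standards: known analyte concentrations (ng/mL) and the mean
# fluorescence intensity (arbitrary units) measured from the beads.
standard_conc = np.array([0.0, 5.0, 10.0, 20.0, 40.0])
standard_signal = np.array([12.0, 55.0, 101.0, 198.0, 396.0])

# Fit signal = slope * concentration + intercept.
slope, intercept = np.polyfit(standard_conc, standard_signal, 1)

def concentration_from_signal(signal: float) -> float:
    """Invert the calibration line to estimate concentration from fluorescence."""
    return (signal - intercept) / slope

# Example: a sample whose beads fluoresce at 150 arbitrary units.
print(f"Estimated concentration: {concentration_from_signal(150.0):.1f} ng/mL")
```

In practice an immunoassay calibration is often sigmoidal rather than strictly linear, but the idea is the same: known standards anchor the curve, and unknown samples are read off it.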

The McDevitt lab also examines biomarkers: analytes and the conditions that they indicate. The PNBCs can already be applied to a wide array of analytes: pH, electrolytes, short polypeptides, metal cations, sugars, biological cofactors, cytokines, toxins, proteins, antibodies, and oligonucleotides.1 A PNBC can be used with saliva to diagnose acute myocardial infarction by means of the biomarkers C-reactive protein, myoglobin, and myeloperoxidase.4

The field of nanochips shows great promise across its medical applications and in the effect it could have on healthcare. PNBCs have already been shown to have applications in HIV monitoring, chest pain diagnosis, and gynecological cancer screening,6 as well as potential cancer and cardiac applications.1 The use of PNBCs alongside the electrocardiogram (ECG) has been shown to be more reliable than the ECG alone in diagnosing cardiac abnormalities.4 The modular design gives the device the ability to support new tests, and increased interest in the nano-bio-chip will lead to progress in all diagnostic research.1 Improvements in microchip production will allow for even smaller, less invasive biochips, and decreases in material costs are also to be expected.2 The PNBCs have global implications because an affordable, mass-produced clinical test could revolutionize diagnosis worldwide.2 The PNBC is a self-contained analysis method, meaning that it can be used for diagnosis even in underdeveloped areas lacking laboratory facilities. A PNBC designed for HIV detection is particularly needed in developing nations.

References

1. Jokerst, J. V.; Floriano, P. N.; Christodoulides, N.; McDevitt, J. T.; Jacobson, J. W.; Bhagwandin, B. D. Programmable Nano-Bio-Chip Sensors: Analytical Meets Clinical. Analytical Chemistry 2010, 82, 4533-4538.
2. Jokerst, J. V.; McDevitt, J. T. Programmable nano-bio-chips: multifunctional clinical tools for use at the point-of-care. Nanomedicine 2010, 5, 143-155.
3. Jokerst, J. V.; Camp, J. P.; Wong, J.; Lennart, A.; Pollard, A. A.; Floriano, P. N.; Christodoulides, N.; Simmons, G. W.; Zhou, Y.; Ali, M. F.; McDevitt, J. T. Nano-Scale Control of Sensing Elements in the Programmable Nano-Bio-Chip. Small 2010, 1-41.
4. Floriano, P. N.; Christodoulides, N.; Miller, C. S.; Ebersole, J. L.; Spertus, J.; Rose, B. G.; Kinane, D. F.; Novak, M. J.; Steinhubl, S.; Acosta, S.; Mohanty, S.; Dharshan, P.; Yeh, C.; Redding, S.; Furmaga, W.; McDevitt, J. T. Clinical Chemistry 2009, 55, 1530-1538.
5. Jokerst, J. V.; Raamanathan, A.; Christodoulides, N.; Pollard, A. A.; Simmons, G. W.; Wong, J.; Gage, C.; Furmaga, W. B.; Redding, S. W.; McDevitt, J. T. Nano-biochips for high performance multiplexed protein detection: determinations of cancer biomarkers in serum and saliva using quantum dot bioconjugate labels. Biosensors & Bioelectronics 2009, 24, 3622-3629.
6. Jokerst, J. V.; Bhagwandin, B. D.; Jacobson, J. W.; McDevitt, J. T. Clinical applications of a programmable nano-bio-chip. Clinical Laboratory International 2009, 6, 24-27.

Two Photons Are Better Than One

by: Rahul Kamath

An Exciting Way to Image Neuronal Change Over Time

Andy feels his legs touch the bedpost when he wakes up in the morning. He also feels them when he sits down in his wheelchair every day. The problem is, Andy’s legs are both amputated. He is experiencing a condition known as phantom limbs; although an individual is missing an arm or a leg, he or she still feels its presence. Scientists believe that this condition is caused by the activation of underlying somatosensory cortex connections in regions adjacent to the “arm” or “leg” section of the brain.3 In other words, when one part of the body is touched or stimulated, the patient feels as if the amputated, non-existent body part is “touched” as well.

But studying such an issue proves quite difficult. In fact, many diseases that involve changes in the brain are tough to observe because of the minute changes in brain tissue and neural circuitry. However, there is a relatively new method that is helping to uncover exactly what happens in a diseased brain: two-photon microscopy.

This technique involves the absorption of two photons of light by target molecules; as these molecules relax from their excited state, they emit light. This emitted light is then used to form detailed images of particular regions of tissue. Microscopes that rely on single-photon absorption provide lower resolution and inferior spatial imaging, and have the potential to damage the sample.2 The reason for the late arrival of two-photon microscopy on the research scene was the previous unavailability of extremely high-powered lasers.2
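As a back-of-the-envelope illustration of the principle (the wavelengths here are chosen for illustration and are not from the article), photon energy is $E = hc/\lambda$, so two near-infrared photons absorbed together can deliver the same excitation energy as a single photon of half the wavelength:

$$\frac{hc}{800\ \text{nm}} + \frac{hc}{800\ \text{nm}} = \frac{hc}{400\ \text{nm}}$$

Because the odds of two photons arriving at a molecule essentially simultaneously scale with the square of the light intensity, appreciable excitation happens only at the laser focus; this confines the signal to a tiny volume and is why such high-powered pulsed lasers are required.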

The most interesting and no doubt revolutionary aspect of this microscope, concerning neural tissue in particular, is its ability to image in vivo tissue, tissue from live organisms. For example, by using surgical procedures, a clear glass lid can be placed over a mouse’s brain in an area where observation is desired. This allows the microscope’s laser to easily image the surface of the brain without harming the mouse.

In addition, the same tissue can be imaged over lengthy periods of time, opening up the possibility for longitudinal studies that track changes in neural structure. For example, the figure above shows a set of image slices taken by the microscope. The images have been overlapped onto one another in order to give a full picture of the neurons in that region of the brain. The clarity is great enough that one can observe a dendritic spine, a small protrusion from a neuron’s dendrite, shown by the yellow arrow and enlarged in panel D. Scientists could therefore determine if this spine were to stop fluorescing properly due to a lack of calcium in the cell. The other arrows in the image mark the location of various parts of the neuron that can also be tracked over long periods of time.

This type of imagery is valuable for other reasons as well. With conventional microscopes, neural tissue readily scatters light, resulting in poor imaging. The two-photon microscope employs longer-wavelength, near-infrared light when scanning tissue, eliminating this problem.4 In addition, scientists have developed techniques through which high-resolution images can be obtained from live specimens. These images are usually free of motion artifacts, distortions in the image caused by movements of the organism under investigation.1

The two-photon microscope’s advanced imaging technique allows the proper study of underlying cortical brain function in organisms without the confounding use of sedatives. Any sort of anesthetic would normally interfere with the background processing that occurs in neural tissue and hinder attempts to gain a better understanding of how neurons integrate information and work together.

The only thing we can do now for Andy is to try to explain to him the cause of his condition. Hopefully, with advances like the two-photon microscope, the causes of conditions such as these can be determined, leading the way to potential therapeutic and perhaps even prophylactic measures, especially for disorders such as Alzheimer’s disease and Rett syndrome.

Unnatural Numb3r

by: Thomas Sprague

The Modern Neuroscience of an Ancient Cognitive Capacity

In 2551 BC, one of the most profound engineering accomplishments of the ancient world was made. 2.3 million stones, averaging roughly two and a half tons each, were carefully extracted, smoothed, and hauled up a complex system of ramps to create a lasting monument for the pharaoh Khufu: the Great Pyramid at Giza. For thousands of years, the 450-foot-tall structure stood as the tallest man-made monument in the world. Even now, its construction remains so precise that you cannot slide a postcard between two stones. All of this was accomplished without any modern engineering equipment—no calculators, computers, or advanced geographic surveying technology. Our unique advanced cognitive abilities – those that enable engineers and architects to undertake such endeavors – go back thousands of years. But where do these faculties come from?

When we consider those things that separate humans from nonhuman primates, it is easy to focus on human-specific capacities like language and music. These profound behavioral adaptations are certainly the result of immense evolutionary progress. But, as we all learned in high school biology, we share about 96% of our genetic material with our closest living relative, the chimpanzee. If all of our species’ new cognitive proficiencies are in fact brand-new developments in the human genome, then that minuscule genetic change would have needed to go quite a long way. But mother nature is smarter than this – modern neuroscience is unveiling how the story of our escape from the jungle into the vast networked civilization we know today could be told in terms of adaptation rather than innovation: why reinvent the wheel when instead you could add tires and use a new alloy?

In the past twenty years, neuroscientists around the world have begun to examine where our understanding of and ability to work with complex symbolic numerical and mathematical concepts originated. To what extent can our numerical skills be called “unique” from the rest of the animal kingdom? What was it that changed – possibly within that 4% of genetic code – that adorned our brains with the capacity to count and comprehend precise numerical quantities?

First, let’s look more deeply into where our numerical knowledge originated from an evolutionary perspective. To do so, we can examine approximate numerical cognition in humans compared with other animals. Consider a small group of chimpanzees exploring a treacherous part of the jungle inhabited by a different hostile group. How does this group of explorers make a decision whether or not to engage in conflict with the hostiles? A basic numerical competence is required on the part of the animals to make such a judgment – the invaders must compare the number of members among their group to the number of enemies they may need to fight. In experimental settings, a group of chimpanzees does not approach a simulated intruder (signaled by a fake call played from a speaker) unless the group numbers three or more.1 Again, mother nature is efficient: three appears to be the number of adult male chimpanzees needed to kill another chimpanzee.2

But it is not only primates that show this kind of numerical understanding – fish, bees, cats, dogs, and human infants show similar preferences for “more,” especially when detecting novelty or making decisions.3-6 These findings appear simple and self-explanatory – of course animals can distinguish more from less – but they tell us something quite deep. Understanding number, at least in an approximate sense, is not something that makes humans special.

Numerical ability, then, is clearly present throughout the animal kingdom. So what is different about the numerical cognition of humans compared to that of other animals? As mentioned above, human infants and many animal species can be remarkably good at making greater/fewer judgments about relevant objects in the world. But monkeys, unlike people, do not approximate pi or build enormous monuments to dying leaders. It instead appears to be an extension of this evolutionarily-conserved approximate-numerical system that results in the mathematical knowledge of number found across much of human civilization.

What evidence exists to suggest that number is something the brain treats in a special way? Those who have taken a cognitive psychology class are likely familiar with the general (though not universal) understanding that the fusiform gyrus, a strip of cortex across the rear underside of the brain, is responsive to images that require fine visual expertise to discriminate. This part of the brain responds especially strongly when viewing an upright image of a face, but also when someone with expertise in identifying objects, such as classic cars or species of birds, views images of cars or birds, respectively. The brain is smart – why waste precious space and energy building a smattering of parts and pieces that all accomplish the same function on different kinds of inputs? Instead, mother nature appears to have found a way to allocate resources such that a single chunk takes care of the critical operation and performs this task on whatever kind of information it receives – in this example, discrimination of the fine details of visual images, whether faces, birds, or cars.

Numerical cognition appears to work the same way. Early neuroimaging experiments, mostly using functional magnetic resonance imaging (fMRI), a technique whereby researchers can peek inside the skull and see where blood is flowing as a subject performs a task, found that a small area on both sides of the brain called the intraparietal sulcus (IPS) responded especially strongly when subjects performed tasks related to numbers. Whether the numbers were presented as dot patches, spoken or written words, or Arabic numerals, the same part of the brain, about 4 inches diagonally above and behind each ear, responded robustly.7-9,5 This piece of cortex was responsive to number, regardless of how the subject received this information.

Even more interesting is a follow-up experiment in which a team of researchers posed several cognitively-relevant questions regarding an image consisting of 2 different Arabic numerals with different sizes, brightness, and numerical values. Each subject was asked to determine which numeral was bigger in size, brighter in color, and larger in value. Even though the questions were quite varied, the subjects’ brains responded the same way to all these kinds of judgments.10

But what do judgments of size, brightness and number have in common? It turns out that the brain may have found a way to represent and compare specifically the magnitude of a stimulus in an abstract fashion, without regard to which kind of magnitude is being compared.9 The IPS, like the entire brain, sits inside the pitch-black attic of the skull, with its only source of information being the thousands and thousands of cellular wires carrying signals from other parts of the brain and the sensory organs. It has no idea whether the signals it receives are coming from the ears or the eyes, or whether the information encoded is about brightness, size, or number. In a sense, where the information is coming from doesn’t really matter – the same neural algorithm can discriminate any of these different examples of magnitude information.

Given that our sense of number may just be a particular implementation of a more abstract sense of “how big” in general, it could plausibly follow that individuals who have a more keenly-attuned magnitude comparison system – that is, subjects who are better at comparing the number of objects presented during a quick display – are more likely to succeed at grasping more advanced mathematical concepts. This indeed turns out to be the case: in a study of 64 high school students, those students who had performed better in math courses early in school also performed better when asked to determine which of two dot fields contained more dots, a common test of the abstract number system.11 Though it is not clear whether these students had better abstract numerical abilities because they had better symbolic math training, or whether they were instead better at symbolic math as a function of their better abstract numerical abilities, this does tell us that symbolic math skill and abstract numerical abilities are correlated. Symbolic math is not a separable skill, but rather sits on top of the ancient numerical abilities present across the animal kingdom.
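
To make the idea of being “better at comparing quantities” concrete, the sketch below is a purely illustrative Python toy (not taken from the study) of the standard account of the approximate number system: each quantity is represented with noise that grows with its size, so accuracy depends on the ratio between the two quantities rather than their absolute difference.

```python
import random

# Illustrative model of approximate-number comparison (Weber's law):
# each quantity is estimated with Gaussian noise proportional to its size,
# so 10 vs 20 dots is far easier to discriminate than 18 vs 20.
def noisy_estimate(n, weber_fraction=0.2):
    return random.gauss(n, weber_fraction * n)

def accuracy(n1, n2, trials=10_000, weber_fraction=0.2):
    """Fraction of trials on which the larger quantity is correctly picked."""
    correct = sum(
        (noisy_estimate(n1, weber_fraction) > noisy_estimate(n2, weber_fraction)) == (n1 > n2)
        for _ in range(trials)
    )
    return correct / trials

print(accuracy(10, 20))  # 2:1 ratio -> near-ceiling performance
print(accuracy(18, 20))  # 10:9 ratio -> much closer to chance
```

In this toy, a smaller Weber fraction plays the role of the sharper acuity that, in the study above, tracked math achievement.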

How, then, do we learn our more precise concept of mathematical number? Though children of very young ages can reliably discriminate between two quantities that differ by a 2:1 ratio, it takes longer before they are able to make more fine-grained distinctions. Abstract numerical ability is early and innate – but an understanding of numerical quantities, such as “5,” “13,” and “42,” rather than qualities such as “a few,” “some more,” and “a lot,” takes time. In the Presidential Lecture at the 2009 Society for Neuroscience annual meeting in Chicago, IL, Elizabeth Spelke of Harvard University argued that explicit verbal counting, which requires persistent practice and which animals never acquire, is the “missing link” between the abstract numerical abilities of the animal kingdom and the precise mathematical skills found exclusively in humans. At first, Spelke says, human infants can only indirectly represent quantity – for example, an infant would have an idea that two balls were more than one ball, but would not know that two balls were two balls (they would be understood as “ball and ball”, not “two balls”). It is not until the operation of counting is learned, typically verbally, that the concept of natural number emerges. Once the understanding is in place that each succeeding number, with its own linguistic label, is one more than the previous number, the entire realm of natural numbers becomes available to the child. They have gained access to a secret known only to humans which allows for equally-precise representation of any quantity – from 4 marbles to 133 blocks.

Some cultures never acquire such an understanding of countable natural numbers, much less more advanced symbolic mathematical concepts. Thus, they should only be able to make numerical decisions through the abstract number system. Stanislas Dehaene of INSERM in Paris, France, set out to understand how members of Amazonian tribes, who do not have words for numbers greater than 5, represent and make decisions about number. Each subject was asked to indicate where a quantity, presented as a set of physical objects, should fall on a number line. Performance suggested that number was represented in a logarithmic fashion, rather than the linear representation afforded by precise natural numbers.8,12 Performance of infants and animals trained to make numerical discriminations, along with recordings of single neurons in behaving monkeys and fMRI measurements in humans, also hints at a logarithmic representation of abstract number.5,13,14 Even without explicit numerical understanding, the brain can still make roughly accurate judgments of relative quantity, which are sufficient in most situations.
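
The difference between the two representations is easy to see numerically. The short Python sketch below (an illustration of the idea only, not the task used in the study) places quantities on a 0–100 line under a linear mapping and under a logarithmic mapping; in the logarithmic case small numbers spread out and large numbers crowd together, which is the pattern the Amazonian subjects produced.

```python
import math

# Where a quantity n lands on a 0-100 number line under the two mappings.
def linear_position(n, n_max=100, line_length=100):
    return line_length * n / n_max

def log_position(n, n_max=100, line_length=100):
    return line_length * math.log(n) / math.log(n_max)

for n in (2, 5, 10, 50, 100):
    print(f"n={n:3d}  linear={linear_position(n):5.1f}  log={log_position(n):5.1f}")
# e.g. 10 sits at position 10 on a linear line but near 50 on a logarithmic one.
```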

It thus appears that a relatively small adaptation allowed humans to implement an old but powerful quantity-evaluation system in a new important way, giving rise to feats like the pyramids of Giza. But the ancient Egyptians built the pyramids with none of the advanced technologies we know and cherish today; similarly, species around the world live and thrive without erecting edifices or calculating interest rates. Nevertheless, the primitive magnitude-estimation system is more than enough for survival among nonhuman species, and the ancient Egyptian construction techniques were more than suitable for building massive monuments. The numerical abilities we have now, despite their apparent uniqueness, may just be icing on the evolutionary cake.

Bibliography

1. Wilson, M.L., Hauser, M.D., & Wrangham, R.W., Does participation in intergroup conflict depend on numerical assessment, range location, or rank for wild chimpanzees? Animal Behaviour 61 (6), 1203-1216 (2001).
2. Wrangham, R.W., Evolution of coalitionary killing. Yearbook of Physical Anthropology 42, 1-30 (1999).
3. Agrillo, C., Dadda, M., Serena, G., Bisazza, A., & Chapouthier, G., Use of Number by Fish. PLoS ONE 4 (3), e4786 (2009).
4. Gross, H.J. et al., Number-based visual generalisation in the honeybee. PLoS ONE 4 (1), e4263 (2009).
5. Nieder, A., Counting on neurons: the neurobiology of numerical competence. Nat Rev Neurosci 6 (3), 177-190 (2005).
6. Thompson, R.F., Mayers, K.S., Robertson, R.T., & Patterson, C.J., Number coding in association cortex of the cat. Science 168 (3928), 271-273 (1970).
7. Cantlon, J., Platt, M., & Brannon, E., Beyond the number domain. Trends Cogn Sci (2009).
8. Dehaene, S., Dehaene-Lambertz, G., & Cohen, L., Abstract representations of numbers in the animal and human brain. Trends in Neurosciences 21 (8), 355-361 (1998).
9. Walsh, V., A theory of magnitude: common cortical metrics of time, space and quantity. Trends Cogn Sci (Regul Ed) 7 (11), 483-488 (2003).
10. Pinel, P., Piazza, M., Le Bihan, D., & Dehaene, S., Distributed and overlapping cerebral representations of number, size, and luminance during comparative judgments. Neuron 41 (6), 983-993 (2004).
11. Halberda, J., Mazzocco, M.M.M., & Feigenson, L., Individual differences in non-verbal number acuity correlate with maths achievement. Nature 455 (7213), 665-668 (2008).
12. Pica, P., Lemer, C., Izard, V., & Dehaene, S., Exact and approximate arithmetic in an Amazonian indigene group. Science 306 (5695), 499-503 (2004).
13. Nieder, A., Freedman, D.J., & Miller, E.K., Representation of the quantity of visual items in the primate prefrontal cortex. Science 297 (5587), 1708-1711 (2002).
14. Piazza, M., Izard, V., Pinel, P., Le Bihan, D., & Dehaene, S., Tuning curves for approximate numerosity in the human intraparietal sulcus. Neuron 44 (3), 547-555 (2004).
15. Dehaene, S., Izard, V., Spelke, E., & Pica, P., Log or Linear? Distinct Intuitions of the Number Scale in Western and Amazonian Indigene Cultures. Science 320 (5880), 1217-1220 (2008).

Implementing a DNA Library to Explore the Sequence Dependence of Dimerization of BNIP3-like Transmembrane Domains

by: Kushagra Shrinath

For the past decade here at Rice, the MacKenzie lab has been studying transmembrane proteins (proteins that span a biological membrane) and the self-association of alpha-helices within these transmembrane proteins. Standard biophysical and biochemical assays are carried out on point mutants of important transmembrane proteins such as BNIP3 (a protein that promotes apoptosis), Syndecan (a receptor protein linked to various growth factors), and Glycophorin A (a protein that spans the membrane of a red blood cell). By assaying how different point mutants affect the dimerization of the transmembrane proteins, it is possible to deduce both the importance of the location of residues to dimerization and how different residues at particular locations affect dimerization. Due to the critical role that transmembrane domains play in cellular signaling, identifying motifs that cause dimerization is an important first step toward regulating abnormal cell signaling, which can lead to metabolic and immunological disorders, as well as cancer. In this paper we will discuss transmembrane proteins, methods for studying them biochemically, and recent studies in the MacKenzie lab involving DNA libraries.

Transmembrane proteins are integral to the structure and function of cells. They constitute 27% of all proteins made by humans1 and serve a variety of functions in cell signaling and in regulating proper ion concentrations. Although very important in biology, transmembrane proteins are not simple to study. Their helical structure, and moreover the helix-helix interactions caused by this structure, are a result of the amphiphilic (polar and non-polar) environment in which they reside. This amphiphilic environment makes biophysical and biochemical studies of transmembrane proteins difficult.5 Moreover, the accuracy of efforts to use detergent micelles to simulate this environment is unknown.

An assay called TOXCAT, established by Russ and Engelman, however, allows for in vivo measurements of an important facet of transmembrane domains: their ability to dimerize.6 By fusing ToxR, a transcription factor dependent on dimerization, to the transmembrane domain, one is able to detect dimerization when ToxR activates a reporter gene encoding chloramphenicol acetyltransferase (CAT) via the cholera toxin promoter (ctx) (Fig. 1). Russ and Engelman used TOXCAT to measure the effect of single point mutations on Glycophorin A and found that mutations to polar residues show specificity in TOXCAT while they are disruptive in SDS.

The ability of a transmembrane domain to dimerize is largely dependent on the sequence of the interfacial residues that are active in dimerization.4 While point mutants are helpful in deducing how single residue changes affect dimerization, they are impractical to use when trying to uncover motifs and general patterns that cause tight dimers. The number of possible mutants that can be made is astronomically large, especially when considering double and triple mutants. Furthermore, the vast majority of these mutants are not strongly dimerizing, and it would not be viable to spend resources carrying out assays on them. What is needed is an approach that produces numerous mutants and also selects for mutants that are strongly dimerizing. It is here that a DNA library is a valuable tool to explore the sequence-dependent dimerization of transmembrane proteins. Whereas mutagenesis alters a sequence at one position, a DNA library contains several degenerate nucleotides at key positions of interest in the transmembrane protein, allowing it to encode many mutants while keeping the overall structure of the protein the same. Furthermore, single point mutations ignore residues that work in tandem or in combination; instead of focusing on residues at single locations, a library allows motifs of residues to be uncovered. By using a library it is also possible to define select residues at the interfacial region and to include cut sites that eliminate certain residues altogether. This project aims to assess the general framework that is necessary for dimerization via the use of a DNA library. This library codes for human BNIP3, which plays an integral role in apoptosis,2 as well as C. elegans BNIP3, Glycophorin A, and Syndecan 3.
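
As an illustration of how degeneracy multiplies the diversity of a library, the sketch below (a hypothetical example using Biopython, not the lab's actual library design) expands a single degenerate IUPAC codon into its concrete codons and lists the amino acids it can place at one interfacial position.

```python
from itertools import product
from Bio.Seq import Seq  # Biopython, used here only for codon translation

# Subset of IUPAC degenerate-base codes sufficient for this example.
IUPAC = {
    "A": "A", "C": "C", "G": "G", "T": "T",
    "R": "AG", "Y": "CT", "S": "CG", "W": "AT",
    "K": "GT", "M": "AC", "N": "ACGT",
}

def expand(degenerate_codon):
    """All concrete codons encoded by a degenerate IUPAC codon."""
    return ("".join(bases) for bases in product(*(IUPAC[b] for b in degenerate_codon)))

def encoded_residues(degenerate_codon):
    """Amino acids reachable at a position encoded by this degenerate codon."""
    return sorted({str(Seq(codon).translate()) for codon in expand(degenerate_codon)})

# A hypothetical degenerate codon: R = A/G, S = C/G, T fixed.
print(encoded_residues("RST"))  # -> ['A', 'G', 'S', 'T'] (Ala, Gly, Ser, Thr)
```

Placing such codons at several interfacial positions at once is what lets one library encode many variants while holding the rest of the transmembrane domain constant.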

Synthetic oligonucleotides that have the same backbone but encode different chosen amino acids at the interfacial positions of the transmembrane domain will be amplified by PCR. They will be inserted into a pccKan vector in between the N-terminal ToxR transcription factor and the C-terminal Maltose Binding Protein. The ligated vector will be transformed into E. coli, and the DNA harvested will represent the library from which future screening will take place. This DNA will then be transformed into NT326 cells, in which transmembrane domains that dimerize strongly confer resistance to chloramphenicol. By growing the NT326 cells on increasing concentrations of chloramphenicol and sequencing a number of cells that survive at various concentrations, one can deduce the residues and motifs that favor dimerization. Finally, the sequences of interest can be assayed via TOXCAT and their relative dimerization values can be compared. Through many screenings, a large enough library can be constructed to allow one to see the effect of an amino acid at a single position on the transmembrane domain. In addition, comprehensive motifs that tightly dimerize will be found.
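
After selection, the surviving clones are sequenced, and the question becomes which residues are over-represented at each variable position. A minimal sketch of that tallying step is shown below, using made-up toy sequences rather than real screening data.

```python
from collections import Counter

# Toy transmembrane-domain sequences recovered from clones that survived
# at high chloramphenicol (hypothetical data, for illustration only).
survivors = [
    "SLAGVIGLF",
    "TLVGLLGAF",
    "SIAGLVGLF",
]

def position_frequencies(sequences):
    """Counter of residues seen at each position across all sequences."""
    length = len(sequences[0])
    return [Counter(seq[i] for seq in sequences) for i in range(length)]

for position, counts in enumerate(position_frequencies(survivors), start=1):
    print(position, counts.most_common(3))
```

Positions where one residue dominates across independent survivors are the candidates for dimerization-promoting motifs.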

One feature that has been observed by Russ et al. is the predominance of a GxxxG motif in dimerization.7 In the absence of a GxxxG motif, a library implemented by Dawson et al. showed that the most tightly dimerizing sequences had polar serines and threonines, which are thought to contribute to dimerization through hydrogen bonds.3 This library is distinctive in that it allows both polar and non-polar residues to lie at interfacial positions, to assess whether or not hydrogen bonding is in fact responsible for the tight dimerization seen with polar residues. Furthermore, this library allows the central glycine to be eliminated, to uncover other tightly dimerizing motifs that lack a GxxxG motif. Results of a preliminary study involving this DNA library have shown that in the most tightly dimerizing transmembrane proteins, there is always a glycine present at the central position and a strong, statistically significant presence of phenylalanine three positions earlier. Polar residues (serine and threonine) are prevalent at the first position of the transmembrane domain, while the branched amino acids leucine and valine predominate at the sixth position. These findings provide insight into general motifs that arise as a result of induced variation and selection and give us the ability to predict the dimerization strengths of similar transmembrane sequences found in biological systems. Future work will include expanding the DNA library as well as creating a new library to ask different questions by changing the residues that can appear at a particular position.
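
Since the GxxxG motif is simply two glycines separated by any three residues, scanning candidate sequences for it is straightforward; the sketch below does so with a regular expression (the example string is a Glycophorin A-like fragment used only for illustration).

```python
import re

# Overlapping scan for GxxxG: two glycines separated by any three residues.
GXXXG = re.compile(r"(?=G.{3}G)")  # lookahead so overlapping motifs are all found

def gxxxg_positions(sequence):
    """0-based start positions of every GxxxG motif in the sequence."""
    return [match.start() for match in GXXXG.finditer(sequence)]

print(gxxxg_positions("LIIFGVMAGVIGTILLI"))  # Glycophorin A-like fragment -> [4]
```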

The use of a DNA library is an elegant method with which important questions can be answered in a holistic and encompassing manner without expending a great deal of resources. Its application to transmembrane domains will potentially reveal important motifs that lead to strong dimerization. This information can be used to design tailored drugs or novel mechanisms that disrupt or enhance the dimerization of biological transmembrane proteins. Given the large number of biological mechanisms regulated by the self-association of alpha-helices, this information can be vital in designing methods to treat a myriad of disorders.

References

1. Almén, M.S., Nordström, K.J., Fredriksson, R., Schiöth, H.B. Mapping the human membrane proteome: a majority of the human membrane proteins can be classified according to function and evolutionary origin. BMC Biology 2009, 7:50.
2. Chen, G., Ray, R., Dubik, D., Shi, L., Cizeau, J., Bleackl, R.C., Saxena, S., Gietz, R.D., and Greenberg, A.H. The E1B 19K/Bcl-2-binding protein Nip3 is a dimeric mitochondrial protein that activates apoptosis. Journal of Experimental Medicine 1997, 186, 1975-1983.
3. Dawson, J.P., Weinger, D.S., Engelman, D.M. Motifs of serine and threonine can drive association of transmembrane helices. Journal of Molecular Biology 2002, 316, 799-805.
4. Lemmon, M.A., Flanagan, J.M., Treutlein, H.R., Zhang, J., Engelman, D.M. Sequence specificity in the dimerization of transmembrane helices. Biochemistry 1992, 31, 12719-12725.
5. MacKenzie, K.R. Folding and Stability of α-Helical Integral Membrane Proteins. Chemical Reviews 2006, 106, 1931-1977.
6. Russ, W.P., and Engelman, D.M. TOXCAT: A measure of transmembrane helix association in a biological membrane. Proceedings of the National Academy of Science 1999, 96, 863-868.
7. Russ, W.P., and Engelman, D.M. The GxxxG motif: a framework for transmembrane helix-helix association. Journal of Molecular Biology 2000, 296, 911-919.

Viral Origins of Merkel Cell Carcinoma

by: Harrison P. Nguyen, Peter L. Rady, Dr. Stephen K. Tyring

Abstract

Merkel Cell Carcinoma (MCC) is a highly aggressive skin malignancy hypothesized to arise from Merkel cells. Recently, the cancer was determined to have an infectious origin: a novel human DNA polyomavirus named Merkel Cell Polyomavirus (MCpyV). MCpyV expresses two non-structural proteins, the Large and Small T Antigens, which have previously been implicated in viral replication and survival. However, the Large T Antigen has been found to harbor mutations that prematurely truncate the MCpyV helicase and inhibit its viral replication capability. Therefore, we propose a vital role for the Small T Antigen in the pathogenesis of the carcinoma. Here we review the hypothesized pathogenesis of the disease as well as the basic viral mechanism of the newly discovered virus.

Background

First discovered in 1875 by Friedrich Merkel, Merkel cells are present in the skin of all known vertebrates. Mice lacking the gene (Atoh1) necessary to produce Merkel cells fail to resolve fine spatial details,1 and thus, Merkel cells are hypothesized to function in the touch discrimination of shapes and textures. Recently, Merkel cells were shown to have an epidermal origin, rather than the previously hypothesized neural crest origin.2 In humans, Merkel cells are distributed along the basal zone of the epidermal, adnexal, and mucous membrane epithelium. Nevertheless, immunohistochemical analysis of Merkel cells shows staining for both epithelial and neuroendocrine markers. Merkel cells are oval-shaped, have an indented nucleus, and possess desmosomes that connect them with neighboring keratinocytes. In 1972, Cyril Toker first described an unusual, highly aggressive skin tumor with electron-dense neurosecretory granules. Since Merkel cells are the only cutaneous cells to form neurosecretory granules, the neuroendocrine carcinoma was attributed to the Merkel cells and subsequently named Merkel Cell Carcinoma (MCC). MCC is characterized by a painless, firm, red hemispherical tumor with a smooth, shiny surface that grows rapidly over a period of weeks to months.

Epidemiology

The American Cancer Society estimated 1,500 cases of MCC in the U.S. for 2008. This figure is triple the number of cases reported twenty years ago, owing to improvements in methods of detection.3 Although the reported incidence remains ~65 times lower than that of melanoma, MCC is twice as lethal; one in every three MCC patients will die from the malignancy.4 The mean age of patients at the time of diagnosis is ~70 years.5 A strong link exists between MCC and ultraviolet light exposure. Regional incidence rates of MCC increase with sun exposure as measured by the UVB solar index.6 In particular, incidence of MCC is highest at equatorial latitudes. In one report, 81% of primary tumors occurred on sun-exposed skin, with 36% located on the face, the most sun-exposed anatomical site.7 Additionally, Caucasians have the greatest risk for developing the cancer. More notably, there is a clear association between MCC and immunosuppression. Chronically immunosuppressed individuals are 15 times more likely to develop the malignancy than are age-matched controls.4 For instance, MCC occurs more frequently in HIV and organ transplant patients (12/100,000/year),5 and MCC is more lethal in immunosuppressed individuals, with a mortality rate of up to 56%.7 Interestingly, there have been several documented cases of MCC regression following restoration of immune function.8

Histology

Currently, the preliminary diagnosis is made on the basis of histopathology. The tumor has irregular margins, and its cells are arranged in strands or nests.9 Significant spacing between cells is typical, indicating a lack of cell-cell adhesion. As seen in many other cancers, the mitotic index in MCC tumors is very high, including many atypical mitoses. For definitive diagnosis, immunohistochemical analysis is usually required. Staining for cytokeratin 20 reveals a local aggregation of these filaments in a perinuclear dot-like pattern (Figure 1).

Pathogenesis

Currently, understanding of the molecular basis of MCC is still very limited. Because alterations of it are common in epithelial cancers, the well-described mitogen-activated protein kinase (MAPK) pathway has been extensively studied in MCC, and it is clear how a mutation in this pathway could lead to transformation and, especially, immortalization. MAPK signaling traditionally plays a key role in many cell processes, including proliferation, suppression of apoptosis, migration, and differentiation;5 however, several studies of MCC have found little evidence for a role of the MAPK pathway in tumorigenesis. Interestingly, traditional mutants in the MAPK pathway, such as the receptor tyrosine kinase c-kit and the ERK protein, which underlie other epithelial cancers, are normal in MCC.11,12 Although the MAPK pathway as a whole is generally inactive, a study demonstrated that inhibition of the farnesylation of the Ras protein, a particular component of the MAPK pathway, is sufficient to suppress MCC tumor growth in mice.13 The results of these studies, taken together, suggest the relevance of another Ras-regulated signaling pathway involving class 1 phosphoinositide 3-kinase (PI3K) and the Akt kinase. A downstream target of the PI3K/Akt pathway is the tumor suppressor p53, and induction of p53 accompanies the apoptosis induced by Ras inhibition in MCC tumors.13 Although studies of p53 mutation in MCC have been inconclusive, the finding that induction of p53 accompanies tumor suppression suggests that p53 expression and stability are related to MCC tumorigenesis, most likely through a downstream target of p53 in MCC progression. The protein Rb1 is a downstream target of p53 that functions as a key regulator of the gene expression driving the G1/S transition, and its demonstrated impact in virtually all cancers points to a significant role of Rb1 in MCC as well. In its hypophosphorylated state, Rb1 prevents cell cycle progression by inhibiting the E2F transcription factor. To enter S phase, cyclin-dependent kinases, which are themselves indirectly regulated by p53, phosphorylate Rb and thus inactivate it, releasing E2F for cell cycle progression. Several mechanisms can allow Rb to mutate or become inactive, contributing to immortalization.5

Discovery of an infectious origin

With strong evidence supporting a correlation between immunosuppression and MCC, researchers have long hypothesized an infectious origin for the cancer. The mystery was partially unraveled in January 2008 when Feng et al. at the University of Pittsburgh identified a novel human DNA polyomavirus, which the authors named Merkel Cell Polyomavirus (MCpyV), in the full-genome sequencing of MCC samples. The authors developed a technique known as digital transcriptome subtraction (DTS), in which all mRNAs from a tumor are reverse transcribed into cDNA and then compared to the human genome. All human sequences are “subtracted,” and the remaining transcripts are assumed to be non-human sequences. Feng et al. conducted DTS with four MCC tumors, in which 99.4% of the sequences aligned closely with known human transcripts, and they established that one sequence common to all four tumors carried high sequence identity with the human BK polyomavirus antigen. Following extension and amplification, the authors identified the integration of a novel virus that has now been formally named MCpyV. To determine if MCpyV had a causal role, the researchers then screened ten MCC tumors for MCpyV using PCR. Seven were strongly positive and one was deemed weakly positive. Subsequent studies testing diverse populations have confirmed a ~80% presence rate of MCpyV DNA in MCC tumors, suggesting that MCpyV is a likely cause of MCC, although this has yet to be definitively confirmed.15, 16 Although not all MCC tumors contain MCpyV, cancers that show presence of the virus are correlated with an increased potential for metastasis.17 Furthermore, based on Southern hybridization using a MCpyV DNA probe, Feng et al. determined that five of the eight MCpyV-positive tumors contained monoclonal integration of the viral genome, suggesting that integration of the MCpyV genome occurs before metastasis. In conjunction with the high frequency of MCpyV in MCC, this finding strongly indicates that MCpyV plays a causal role in tumorigenesis.
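
Conceptually, DTS is a filtering step: sequence everything in the tumor, discard whatever matches the human reference, and inspect what is left. The toy Python sketch below captures only that logic; real DTS aligns reads against full human sequence databases rather than using the exact-substring test and made-up sequences shown here.

```python
# Toy stand-ins for known human mRNA sequences; real DTS uses full databases.
human_transcripts = {
    "ATGGCCAAGGTTCTG",
    "TTGACCGGTAACCTG",
}

def matches_human(read, reference=human_transcripts):
    """Crude 'alignment': does the read overlap any reference transcript exactly?"""
    return any(read in transcript or transcript in read for transcript in reference)

# Hypothetical cDNA reads from a tumor; the middle one has no human match.
tumor_reads = ["ATGGCCAAGGTTCTG", "GGCATTCGATCCAAT", "TTGACCGGTAACCTG"]

candidate_nonhuman = [read for read in tumor_reads if not matches_human(read)]
print(candidate_nonhuman)  # reads remaining after the human "subtraction"
```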

Polyomaviruses are small (40-50 nanometers in diameter), non-enveloped, circular, double-stranded DNA viruses. MCpyV is the sixth member of the family Polyomaviridae identified to infect humans. Polyomaviruses are highly species-specific and are thought to co-evolve with the organisms that they infect. Their genomes are divided into three regions: early, late, and regulatory (Figure 2). The early region is expressed early in virus infection, and its expression continues during the late stage of infection. Importantly, the early region encodes non-structural proteins, namely the Large Tumor (T) Antigen and the Small Tumor (T) Antigen. The late region is expressed during and after genome replication and encodes structural proteins, in particular VP1-3. The regulatory region contains transcriptional promoters and enhancers as well as the origin of replication. Because their genomes are rather basic, polyomaviruses rely on the host cell’s machinery for replication. The early-expressed genes bind to host proteins, forcing the cell into S phase and thus facilitating viral replication. In previously discovered polyomaviruses, the Large T Antigen plays many diverse roles in viral replication. It is composed of three regions. Region 1 is essential in virion assembly, viral DNA replication, transcriptional control, and oncogenic transformation. Region 2 is important for host cell transformation, as it contains an amino acid sequence that is important for binding the tumor suppressor pRB. Region 3 binds p53, most likely to promote cell growth and to facilitate host cell entry into S phase. In other polyomaviruses, the Large T Antigen plays an indispensable role in viral replication and survival. However, Shuda et al. identified MCC tumor-derived Large T Antigens harboring mutations that prematurely truncate the MCpyV LT helicase,22 a finding confirmed by other publications10 (Figure 3). The authors showed that with this truncation, the MCpyV Large T Antigen did not possess the capacity to replicate viral DNA but still preserved the ability to bind pRB. Thus, the MCpyV Large T Antigen is most likely still essential to the development of the carcinoma, but it does not retain the cell-lethal effects seen in the Large T Antigens of other polyomaviruses.8

Small T Antigen

With the interesting mutation and subsequent loss of function of the MCpyV Large T Antigen, questions arise as to how MCpyV can conquer host cells and replicate to infect more cells. Thus, the lab of Stephen Tyring at University of Texas at Houston Health Sciences Center proposes to focus on the other oncoprotein of interest, the Small T Antigen. In other polyomaviruses, the Small T Antigen plays a reduced, supplementary role to the Large T Antigen, but MCpyV could present a novel case in which the Small T Antigen is the predominant pro-carcinogenic component. The N-terminal sequence of the Small T Antigen is highly conserved among polyomaviruses, containing a heptapeptide region involved in cell replication. In addition, a J region (also present in the Large T Antigen) functions as a chaperone.18 However, the middle region of the Small T Antigen carries high variability, and consequently, its functionality in MCpyV is largely unknown. Most likely, it binds and inactivates protein phosphatase 2A (PP2A), an important biological enzyme for cells, and thus promotes S phase entry for the virus. In other polyomaviruses, Small T Antigen inhibition of PP2A is necessary for ST-mediated transformation to occur19 (Figure 4).

PP2A refers to a family of serine-threonine phosphatases in eukaryotic cells. PP2A is composed of three subunits, each with several isoforms, allowing over 100 known PP2A complexes to exist. Particular PP2A complexes have characteristic substrates and functions within the cell, making it very difficult to establish the molecular mechanisms used by the Small T Antigen to induce transformation.20

The MCpyV Small T Antigen probably assumes a larger role in cell transformation and, in particular, replication, a process that was left unexplained given the mutation in the Large T Antigen. Thus, our lab sets out to elucidate the functions of the MCpyV Small T Antigen, particularly its binding partners and its behavior in a host-cell environment. We have already shown that the Small T Antigen is highly conserved among MCC tumors containing the MCpyV genome, and we hope that information regarding the behavior of the protein will answer important questions regarding the pathogenesis of Merkel Cell Carcinoma.

The proposed work is a stepping stone in the struggle to understand and cure cancer. Although its prevalence is dwarfed by its melanoma and non-melanoma counterparts, Merkel Cell Carcinoma is rapidly growing in incidence, and its 33% fatality rate presents a rather bleak outlook for the unfortunate individuals who are diagnosed with the disease. This research offers hope for understanding the mechanism by which the viral agent is able to transform and immortalize target cells, subsequently leading to rapid metastasis. The implications of this research are far-reaching; insight into the mechanism of infection and viral survival is expected to shed light on questions relating to virology, Merkel cell function, and cancer in general.

Harrison Nguyen is a sophomore Cognitive Sciences major at Hanszen College. He is primarily interested in infectious carcinogenesis, and he works in the mucocutaneous laboratory at University of Texas Health Sciences Center under the guidance of Dr. Stephen Tyring. His laboratory is specifically studying the role of the Small Tumor Antigen in Merkel Cell Carcinogenesis. Harrison aspires to pursue his passion of medicine in the field of dermatological oncology as a research-physician.

References

  1. Maricich SM, Wellnitz SA, Nelson AM, Lesniak DR, Gerling GJ, Lumpkin EA, Zoghbi HY. 2009. Merkel Cells Are Essential for Light-Touch Responses. Science. 324: 1580 – 1582.
  2. Morrison KM, Miesegaes GR, Lumpkin EA, Maricich SM. 2009. Mammalian Merkel cells are descended from the epidermal lineage. Dev Biol. 336(1):76-83.
  3. Lemos B, Nghiem P. 2007. Merkel cell carcinoma: more deaths but still no pathway to blame. J. Invest. Dermatology. 127: 2100-2103.
  4. Heath M, Jaimes N, Lemos B, Mostaghimi A, Wang LC, Penas PF, Nghiem P. 2008. Clinical characteristics of Merkel cell carcinoma at diagnosis in 195 patients: the AEIOU features. J. Am. Acad. Dermatol. 58: 375-381.
  5. Becker JC, Schrama D, Houben R. 2008. Merkel cell carcinoma. Cell. Mol. Life Sci.
  6. Agelli M, Clegg LX. 2003. Epidemiology of primary Merkel cell carcinoma in the United States. J. Am. Acad. Dermatology. 49: 832-841.
  7. Penn I, First MR. 1999. Merkel’s cell carcinoma in organ recipients: report of 41 cases. Transplantation. 68: 1717-1721.
  8. Garneski KM, DeCaprio JA, Nghiem P. 2008a. Does a new polyomavirus contribute to Merkel cell carcinoma? Genome Biology. 9: 228.
  9. Plaza JA and Suster S. 2006.  The Toker tumor: spectrum of morphologic features in primary neuroendocrine carcinomas of the skin (Merkel Cell Carcinoma). Ann. Diagn. Pathol. 10:376-385.
  10. Sastre-Garau X, Peter M, Avril MF, Laude H, Couturier J, Rozenberg F, Almeida A, Boitier F, Carlotti A, Couturaud B, Dupin N. 2009. Merkel cell carcinoma of the skin: pathological and molecular evidence for a causative role of MCV in oncogenesis.  J. Pathol. 218(1):48-56
  11. Strong S, Shalders K, Carr R, Snead DR. 2004. KIT receptor (CD117) expression in Merkel Cell Carcinoma.  Br. J. Dermatol. 150: 384-385.
  12. Houben R, Michel B, Vetter-Kauczok CS, Pfohler C, Laetsch B, Wolter MD, Leonard JH, Trefzer U, Ugurel S, Schrama D, Becker JC. 2006. Absence of classical MAP kinase pathway signaling in Merkel cell carcinoma. J. Invest. Dermatol. 126: 1135-1142.
  13. Jansen B, Heere-Ress E, Schlagbauer-Wadl H, Halaschek-Weiner J, Waltering S, Moll I, Pehamberger H, Marciano D, Kloog Y, Wolff K. 1999. Farnesylthiosalicylic acid inhibits the growth of human Merkel cell carcinoma in SCID mice. J. Mol. Med. 77: 792-797.
  14. Feng H, Shuda M, Chang Y, Moore PS. 2008. Clonal Integration of a polyomavirus in human Merkel Cell Carcinoma. Science. 319: 1096-1100.
  15. Kassem A, Technau K, Kurz AK, Pantulu D, Loning M, Kayser G, Stickeler E, Weyers W, Diaz C, Werner M, Nashan D, zur Hausen A. 2009. Merkel cell polyomavirus sequences are frequently detected in nonmelanoma skin cancer of immunosuppressed patients. Int. J. Cancer. 000: 000-000.
  16. Duncavage EJ, Zehnbauer BA, Pfeifer JD. 2009. Prevalence of Merkel cell polyomavirus in Merkel cell carcinoma. Modern Pathology. 1-6.
  17. Garneski KM, Warcola AH, Feng Q, Kiviat NB, Leonard JH, Nghiem P. 2008. Merkel Cell Polyomavirus is More Frequently Present in North American than Australian Merkel Cell Carcinoma Tumors. Journal of Investigative Dermatology.
  18. Khalili K, Sariyer IK, Safak, M. 2008. Small Tumor Antigen of Polyomaviruses: Role in Viral Life Cycle and Cell Transformation.  J. Cell. Physiol. 215: 309-319.
  19. Hahn WC, Dessain SK, Brooks MW, King JE, Elenbaas B, Sabatini DM. 2002. Enumeration of the SV40 early region elements necessary for human cell transformation. Molecular and Cellular Biology. 22(7): 2111-2123.
  20. Sablina AA, Hahn WC. 2008. SV40 small T antigen and PP2A phosphatase in cell transformation. Cancer Metastasis Rev. 27: 137-146.
  21. Miller RW, Rabkin CS. 1999. Merkel Cell Carcinoma and Melanoma: Etiological Similarities and Differences. Cancer Epidemiology, Biomarkers & Prevention. 8: 153-158.
  22. Shuda M, Feng H, Kwun HJ, Rosen ST, Gjoerup O, Moore PS, Chang Y. 2008. T Antigen mutations are a human tumor-specific signature for Merkel cell polyomavirus. PNAS. 105: 16272-16277.