Archive for the ‘Volume 1 (Spring 2008)’ Category

Engineers Without Borders: Nicaragua Health Clinic

by: Cindy Dinh, Baker ’11

Summer and winter breaks signal a migratory response in some students to leave their pencils and books behind for warmer climates. Rather than coast to the beaches and resorts, students from the Rice chapter of Engineers Without Borders (EWB) leave the hedges several times a year on humanitarian projects.

Formed in 2003, this student-driven group has designed and built sustainable projects in underdeveloped countries. The inaugural project was a water distribution system in El Salvador, and projects in nearby countries have since been implemented, including water sanitation in Mexico and a footbridge in Nicaragua.

After two years of planning, designing and building, Rice EWB can add an environmentally friendly health clinic to its growing list of accomplishments. In a series of four trips, EWB was able to establish ties with organizations in the region, build the structure and finalize plans to open the clinic.

In Bernadino Diaz Ochoa, Nicaragua, a subsistence farming community has made ends meet despite the lack of electricity and access to clean water. With the nearest health clinic a two-hour drive from the town, most members of the community receive little or no health care. In 2006, a Rice EWB team (Nicaragua II) met with local community members to ascertain the community’s greatest need. They soon determined that a health clinic would benefit Bernadino and four neighboring communities, serving more than 600 people.

After observing how a previous EWB water filtering project impacted people’s lives, Wiess senior and EWB project coordinator Tim Burke was motivated to lead a group of six students to finalize the construction of the health clinic in December 2007. “A clinic is a big undertaking and I wanted to see this project through to the end,” Burke said. In previous trips to Bernadino Diaz Ochoa, EWB has implemented a solar lighting system for a local church and three different water purification systems. Families are equipped with clay Potters for Peace filters, which cost $15 each and provide a viable way to receive clean water.

EWB students designed the clinic to cater to the community’s needs while using available resources. The students conferred with civil engineering professors at Rice, architects at NASA and professional engineers during the design process. Driven by environmental concerns, the team was hesitant to use cinder blocks in the building, since they would be costly and would provide poor ventilation. Instead, the clinic was built with straw bales, stucco, local wood and adobe. Using local materials makes the building easier to maintain and substantially lowers the cost of the project.

“Because of the large scale flooding in the area, we can use agriculturally renewable products like straw from the fields to rebuild walls in case it rains,” Burke said.

At the 2007 EWB-USA International Conference, the Rice Nicaragua II team’s work designing and building a green health clinic was recognized with an award for “Most Appropriate Technology.” The design plans for the health clinic and the project on biosand filters in El Salvador were jointly presented at the EWB international conference in March 2008, whose theme was “Sustainable Engineering and Global Health.”

Forming relationships with the community and other organizations is critical in organizing any project. The team usually has one or two interpreters who are fluent in Spanish. Viqui Arbizu-Sabater, Lecturer at the Center for the Study of Languages at Rice University, joined a group of students in May 2007 after learning about EWB’s projects from one of Burke’s writing assignments. “I liked the motivation to help humanity. You don’t need to be an engineer to be able to help,” Arbizu-Sabater said. Prior to the trip she helped students develop their Spanish conversational skills by practicing with them outside of class and translated documents from English to Spanish.

Community partnership is key in sustaining the project after the team leaves. “Some organizations come to a country and build a project and then just leave. There’s no community involvement in the project and no community ownership. Our projects are designed so communities can take ownership of the project after we leave,” Baker sophomore Matt Wesley said.

While all of the projects are designed and planned by students, the local community participates in the actual construction. Partnerships with other non-governmental organizations in the area will provide the necessary backbone of the clinic, including trained staff and a visiting doctor. Rice EWB also educates the community on how to use, maintain and fix the new building.

“One of the most successful parts was when we talked to the Ministry of Health. They were able to donate a hospital bed, basic medical supplies and have a doctor come visit the clinic every 15 days,” Burke said. “Nicaragua offers universal free health care and the government wanted to emphasize rural communities.”

Wesley, who went on two of the Nicaraguan trips, will coordinate a new project in Nicaragua with Baker sophomore Amy Liu. Now that the clinic project is finished, the team has visited multiple communities to assess their needs. “We’re not looking for the community that’s the poorest. We’re also looking for communities that are organized and will be willing to work alongside us,” Wesley said.

As part of the assessment in Pueblo Nuevo, a village two kilometers north of the Costa Rican border, the EWB team conducted a health survey of the community, gathered water and soil samples and surveyed the topography of the region in preparation for a second trip in May 2008. The preliminary plan is to provide Pueblo Nuevo with a central source of water. Since the village is spread across a hill and a valley, EWB will consider installing an electric pump to send water to a storage tank at the top of the hill and letting gravity feed it down to the village.

While these projects require extensive design and research, they also require the effort of a diverse group of students. Wesley is a pre-med biology major and one of the non-engineering members of the team. “You just have to show dedication to the team and be willing to learn during the meetings,” he said. “There’s a big humanitarian aspect to these trips.”

For more information please visit: http://ewb.rice.edu

The Evolution of Intoxication

by: Zeno Yeates, Sid Richardson ’10

Consider the relationship between the bee and the rose. One may wonder about the nature of this relationship and the manner in which it developed. Although conventional knowledge holds that the bee adapted to the pollination patterns of the rose, some consider the reciprocal situation equally possible: that the rose adapted its pollination sequence to the bee in order to better disseminate its seed. Michael Pollan, a contributing writer for the New York Times, among other venues, argues for the latter in his national bestseller, The Botany of Desire. Pollan investigates the nature of the co-evolutionary relationship between humans and plants, making the paradoxical argument that domesticated vegetation evolved to meet our desires. Pollan explores four different human desires, featuring one specific plant as an example of each. He claims that the apple panders to the human desire for sweetness, that the tulip satisfies the desire for beauty, that the potato fulfills our desire for control, and that marijuana indulges our desire for intoxication.
Although Pollan investigates each of these relationships, he seems most concerned with the human desire for intoxication and its fulfillment by the cannabis plant family. Pollan speaks in great depth about the matter in a speech delivered in the Avenali lecture series at UC Berkeley in 2007, entitled “Cannabis, Forgetting, and the Botany of Desire.” Here, he argues that the marijuana plant evolved to gratify the human longing for intoxication, and that this represents an important co-evolutionary event between man and plant. Tetrahydrocannabinol, or THC, is the active compound in marijuana that induces its psychoactive effects of euphoria and repose. Pollan notes that producing the THC molecule is energetically expensive, but that many other plants invest in similarly costly traits. For example, a plant’s production of exotic colors and smells effectively compensates for its lack of mobility. In an example of vegetative ingenuity, Pollan describes the surprising counteraction of lima beans to the herbivory of the two-spotted spider mite. As the spider mite begins to feast on the lima bean, the bean plant releases a volatile chemical whose scent attracts predatory insects that in turn feed on the spider mites. For this reason, Pollan affectionately refers to plants as “nature’s alchemists.” Pollan notes that “Cannabis works on our minds in order to borrow our legs,” additionally arguing that plants are as advanced as humans, or even more so, if judged by the benchmark of organic chemistry. Pollan proposes further evidence, citing that the genome of a roundworm has 20,000 genes and the genome of a human contains 35,000, yet the genome of simple rice contains 50,000.
Returning to the theme of intoxication, Pollan explores plants that generate chemicals to “alter the texture of human consciousness.” Nearly all cultures have exploited this aspect of plants, with coffee, nicotine, and tea offered as some of the more conventional examples. Pollan references the cognitive scientist Steven Pinker for an evolutionary explanation of the production of such complex and energy-intensive compounds. To explain such large allocation of nutrients and energy for the production of these seemingly worthless biochemical compounds, one must consider the established relationship between man and plant. Pollan insinuates that in exchange for their agricultural propagation, plants exploited certain adaptive traits important to the human species. One such trait is reward-seeking, in which the brain induces a biochemical response that engenders the feeling of motivational gratification for achieving some goal or completing some task. In essence, these plants are tapping into the human biochemical reward system.
Pollan goes on to explain that humans came to realize the mind-altering potential of plants through observation of animals. He relates this supposition to the Arabian mythos that explains how humankind came to drink coffee. According to legend, goat herdsmen noticed that whenever their drove partook of the red berries hanging from a particular bush, they became unusually energetic. Observation led to action, and an entire culture progressively developed around the consumption of the plant’s derivatives. The roots of this coffee culture lie in the utility of caffeine as a survival tool: ancient humans potentially gained a significant advantage from its consumption, with heightened mental awareness, for example, endowing powers of endurance before the hunt. Yet, in the same manner in which such drugs create “doors of perception,” they also serve to inhibit normal bodily function, thereby diminishing an individual’s likelihood of survival. That the ancient Greeks used a single word for both medicine and poison betrays the long-understood dichotomous nature of such drugs. In this regard, Pollan actually argues for the benefit of ostensibly degenerative drugs such as alcohol and marijuana. Such drugs, he argues, could also have benefited human societies by promoting social relaxation and relief from existential malaise.

After its potential came to be understood, domestication of the marijuana plant followed two different pathways, one as a psychoactive drug (Cannabis indica Lam.) and the other as hemp (Cannabis sativa L.). At present, following 1,500 years of domestication, the two lineages have become so distinct that appreciable amounts of THC are no longer produced in cultivated hemp. Effectively, wild marijuana no longer exists. Among the psychoactive forms of marijuana, there was domestic selection for strains with greater potency as analgesics, anti-inflammatories, and anti-anxiety agents. Continued development led the artificially selected strains to become progressively more psychoactive. Allyn Howlett, a researcher at St. Louis University, remarked that marijuana and THC represent “everything that Adam and Eve would want after leaving the Garden of Eden.” In other words, cannabis allows mankind to deal with the potentially great suffering that is the human condition.

Following from this, one is compelled to conjecture about the reasons a plant would come to generate this highly complex molecule in the first place. The THC molecule confers special defense mechanisms on the plant, including protection against both predatory insects and ultraviolet radiation. However, Pollan also explores a more subtle rationale for the production of this molecule: it may be more beneficial not to kill one’s enemies outright, because their population will then selectively develop a resistance against that particular toxin. Instead, it is of greater benefit to make one’s predators forget. Thus, it is the altered mental state induced by the consumption of marijuana that may have been the plant’s initial wildcard. In yet another cultural mythos, Pollan explains that humans supposedly first became interested in cannabis upon observing that pigeons in central Asia became disoriented after consuming the plant, and that it is purely accidental that THC is also psychoactive in the human brain.
Inverting conventional logic, Pollan posits that forgetting is not a defect of mental operation, but rather an important component of it. Neurologically, THC acts on the cannabinoid system of the brain, mimicking anandamide (arachidonoylethanolamide), which functions to erase memories. Such cannabinoids allow humans to forget moments of strife and pain. It must also be noted that forgetting is not counter to learning; referencing the American psychologist and philosopher William James, Pollan notes that without losing some memories, the selective memorization of facts would be impossible.
In sum, Pollan investigates the possibility that plants manipulate their consumers through intoxication in a co-evolutionary escapade in which alteration of consciousness is exchanged for domestication. Complex psychoactive molecules such as THC were initially produced as chemical defenses to ward off herbivorous predators. Nevertheless, by happenstance, they also had a neurological effect on the brains of animals. These effects were then observed by humans tending livestock and consequently became incorporated into human society. Although marijuana use impedes normal mental function, casting it in a negative light from the survivalist viewpoint, it also serves to enhance other, potentially beneficial, bodily functions. THC mimics a biochemical compound active in the cannabinoid system, which initiates the dissolution of memories, a function important to the memorization process. It also promotes relaxation important to the amelioration of social tensions. As a result, humans have embraced this uncanny plant and sparked its proliferation across the globe. Bold and original, Pollan’s book compels us to question who selected whom in the evolutionary madness. Ultimately, The Botany of Desire attempts to inspire awe at the sheer complexity and ingenuity of the plant world.

‘Nanorust’ and Clean Water

by: Eui Whan Moon, Baker ’11

Looking back on the years I have lived in the central Asian country of Kyrgyzstan, one of my lasting memories takes place in the small, crowded kitchen of our home. Upon our arrival in the country, my parents and I were strictly warned by other Korean expatriates that it was unsafe to consume tap water that had not been processed. Consequently, my family promptly learned and adopted the purification process. The task is simple, consisting of boiling, cooling, decanting, and filtration (aided by our faithful Brita® jug). Since drinking water is an everyday necessity, this ritual has run on an endless loop in our kitchen for fourteen years, to this very day. At any given moment, I could count on finding a ten-gallon pot of cooling water on the kitchen stove, waiting to be filtered. All this to say: water is a daily essential, and clean water even more so. Yet many regions of the world do not have this luxury, and countless lives are claimed each year by harmful pollutants present in drinking water. However, a recently discovered property of iron oxides may hold a promising application for alleviating the problem of water contamination around the globe.

Here in the United States, every municipal water system is accountable to Environmental Protection Agency (EPA) regulations to provide safe drinking water to each home.1 Nonetheless, many developing countries and cities cannot afford such a system, and for their inhabitants, pouring a cup of tap water is perilous. In a number of Asian countries groundwater — a key resource for rural communities — is contaminated with dangerous levels of harmful microorganisms, inorganic chemicals, and organic chemicals. Chief among these contaminants is the inorganic chemical arsenic.2 Arsenic – a colorless, odorless, and tasteless element – causes various health defects upon ingestion, including skin damage, failure of the circulatory system, and cancer.1 In March 2005, the World Bank and Water and Sanitation Program presented a report on their comprehensive study of groundwater in Asian countries. The study revealed that parts of Bangladesh, China, India, Vietnam, Nepal, and Myanmar were just a few of the numerous hotspots for arsenic contamination. Overall, an estimated sixty-five million people were subject to health risks due to critical levels of arsenic in water.2 The influence of arsenic contamination, in Asia and in many other parts of the world, is so lethal and widespread that the word “crisis” understates the situation at hand.

Meanwhile in the western hemisphere, a handful of scientists at Rice University’s Center for Biological and Environmental Nanotechnology (CBEN) are studying a promising remedy for arsenic contamination. The key to this solution was the discovery of strange magnetic properties among nano-scale magnetite particles. Magnetite (Fe3O4) is an iron oxide, much like rust, so the term “nanorust” was coined for magnetite nanoparticles. Whereas iron(II) oxide (FeO) contains iron only in the +2 oxidation state, Fe3O4 contains iron in both the +2 and +3 states. Nanorust crystals are so tiny that they are measured on the scale of nanometers (10⁻⁹ meters). At this size, magnetite was found to behave differently under the influence of a magnetic field compared with its bulk counterpart. For instance, based on observations made on bulk material, it would take an extraordinarily large magnetic field to extract magnetite nanoparticles suspended in a solution. Yet Dr. Vicki Colvin, the director of CBEN, and her colleagues discovered to their surprise that removing nanorust particles from a solution required only a small magnetic field. Dr. Colvin told Chemical and Engineering News, “We were surprised to find that we didn’t need large electromagnets to move our nanoparticles, and in some cases, handheld magnets could do the trick.”3 Another property of nanorust is its very high surface area per unit mass, a consequence of the particles being so small. The principle here is simple. Picture a very large copper sphere and a very small copper sphere. Most of the atoms in the large sphere are inside the sphere, not on its surface; the opposite is true for the small sphere. Therefore, comparing a one-kilogram copper sphere to a kilogram of nano-sized spheres will show that the latter has much more surface area. In the case of nanorust particles, a kilogram has enough surface area to cover an entire football field.4
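
The scaling behind this comparison is simple geometry: for identical spheres, surface area per unit mass goes as 3/(ρr), so shrinking the radius multiplies the available surface by the same factor. The short Python sketch below works through the copper-sphere comparison described above; the copper density is a standard handbook value, and the 50 nm radius is an illustrative assumption rather than a figure from the article.

```python
# Minimal sketch of the surface-area-per-mass argument above.
# Assumes ideal, identical spheres; the 50 nm radius is an illustrative
# choice, not a value taken from the article.
import math

RHO_CU = 8960.0  # density of copper, kg/m^3 (standard handbook value)

def surface_area_per_kg(radius_m: float, density: float = RHO_CU) -> float:
    """Total surface area (m^2) contained in 1 kg of identical spheres."""
    volume = (4.0 / 3.0) * math.pi * radius_m ** 3
    area = 4.0 * math.pi * radius_m ** 2
    spheres_per_kg = 1.0 / (density * volume)
    return spheres_per_kg * area  # algebraically equal to 3 / (density * radius)

# A single solid 1 kg copper sphere (radius about 3 cm) versus the same
# kilogram of copper divided into 50 nm spheres.
solid_radius = (3.0 / (4.0 * math.pi * RHO_CU)) ** (1.0 / 3.0)
print(f"one 1 kg sphere:   {surface_area_per_kg(solid_radius):.4f} m^2 per kg")
print(f"50 nm nanospheres: {surface_area_per_kg(50e-9):.0f} m^2 per kg")
```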

So how exactly do the properties of nano-scale magnetite help solve the arsenic problem? Arsenic has a high affinity for iron oxides. Nevertheless, the use of conventional-sized iron oxides for purifying water has largely proven impractical, inefficient, and tedious.4 Using nanocrystals of iron oxide for the job is an entirely different matter. Due to their exceptional surface area (which translates to more binding spots for arsenic), a given mass of magnetite particles twelve nanometers in diameter can capture one hundred times more arsenic than an equal mass of the larger iron oxide particles used in filters today.5 Once all the arsenic has been collected by the magnetite, the nanoparticles are easily removed from the water using a simple hand magnet. In describing this process to Science Daily, Dr. Colvin claimed, “Arsenic contamination in drinking water is a global problem and while there are ways to remove arsenic, they require extensive hardware and high-pressure pumps that run on electricity . . . . Our approach is simple and requires no electricity.”6 The only problem is the cost; nanorust particles assembled from pure laboratory chemicals can be very expensive. The key ingredients for water-soluble nanorust are rust and a fatty acid (oleic acid). Heating the rust yields magnetite. A double-layer coating of oleic acid is then applied to each magnetite nanoparticle; this keeps the nanoparticles from sticking to one another and disperses them throughout the water. Cafer Yavuz, a graduate student working under Dr. Colvin, is developing a method to create nanorust particles using inexpensive household items such as rust, olive oil (a source of fatty acid), drain opener, and vinegar. Once perfected, this method will drastically reduce the production cost of nanorust from $2,624 to $21.50 per kilogram.7 Perhaps one day millions of people threatened by arsenic will be saved by nanorust cooked up on their kitchen stoves.

References

1. Drinking water Contaminants. http://www.epa.gov/safewater/contaminants/index.html (accessed 03/20/08), part of United States Environmental Protection Agency.
2. Arsenic Contamination in Asia. http://siteresources.worldbank.org/INTSAREGTOPWATRES/Resources/ARSENIC_BRIEF.pdf (accessed 03/20/08), part of a World Bank and Water and Sanitation program Report.
3. Cleaning Water With ‘Nanorust’. http://pubs.acs.org/cen/news/84/i46/8446notw4.html (accessed 03/20/08), part of Chemical and Engineering News.
4. Merali, Zeeya. Cooking up ‘Nanorust’ Could Purify Water. http://technology.newscientist.com/article.ns?id=dn10496&print=true (accessed 03/20/08), part of New Scientist Tech.
5. Feder, Barnaby J. Rustlike Crystals Found to Cleanse Water of Arsenic Cheaply. http://www.nytimes.com/2006/11/10/science/10rust.html (accessed 03/20/08), part of The New York Times.

The Embryonic Stem Cell Controversy

by: Sergio Jaramillo, Sid Richardson ’10

Most innovative scientific ideas initially face vehement opposition, followed by a gradual process of testing and evolution into universally accepted dogma. Religion and politics — and their delineation of ethics and morality — have historically played a large role in the scientific process, and in bioethics today they continue to assume a large role in morally controversial topics, one of the most prominent of which is embryonic stem cell research. The current embryonic stem cell (ESC) revolution was ignited by Dr. James A. Thomson at the University of Wisconsin in 1998. The call to explore stem cells’ potential to regenerate tissues and more effectively treat Parkinson’s and Alzheimer’s, among many other diseases, has sparked a great deal of political and ethical controversy. The debate over the ethics of using fertilized embryos in stem cell research has also led to groundbreaking findings on the possibility of using other kinds of cells to derive the same benefits.

The isolation of the human ESC was a great scientific achievement with even greater therapeutic potential. An embryonic stem cell is a stem cell derived from the inner cell mass of a fertilized oocyte at the blastocyst stage. These cells are considered pluripotent because they possess the capacity to divide and produce cells derived from all three germ layers (ectoderm, endoderm, and mesoderm). However, ESCs are limited in that they cannot differentiate into trophoblasts, which give rise to the placenta. Scientists believe that one day those “shimmering spheres of human potential” (National Geographic, July 2005) will spark a renaissance of life in damaged tissue.1 However, there are risks: if ESCs are left undifferentiated in the body, they can differentiate uncontrollably, causing a teratoma, a benign tumor. On the other hand, ESCs may offer a cure for Parkinson’s, Alzheimer’s, diabetes, cancer, hemophilia, and many other diseases.

In light of the fact that ESCs have been the center of much controversy, useful insight may be gained by studying the history of religious and secular debate on the beginning of life. In scientific terms, an embryo is defined as the stage at which the dividing cells of the recently fertilized egg gain control of their cellular machinery by beginning to produce their own enzymes; this occurs in the cleaving cell one to two days after conception. The Roman Catholic Church’s current stand on the human status of the blastocyst is that “the ablation of the inner cell mass (ICM) of the blastocyst, which critically and irredeemably damages the human embryo, curtailing its development, is gravely immoral and consequently is gravely illicit.”2 However, this has not always been the Church’s stand; until the twelfth century it held Saint Augustine’s doctrine of “the quickening,” which stated that the embryo acquires humanhood through the acquisition of sentience. From the twelfth century to 1869, the Church held Saint Thomas Aquinas’s doctrine of “delayed hominization,” which stated that of the vegetative, animal, and rational stages, the last must be reached for the embryo to fully attain humanhood. In 1869, Pope Pius IX decreed that life begins at the moment of conception, in light of the knowledge that fertilization involves sperm and egg. Thus, for nearly 2,000 years the Church had accepted the doctrine of “late humanhood” in one form or another, and the new canon has been in place for only about 150 years. Just as the Roman Catholic Church has not been a picture of unwavering conviction on the status of life and its beginning, neither have other faiths such as Islam, Judaism, and Protestant branches of Christianity, in which religious and secular scholars remain split on this issue.

Some regard ESC research as tantamount to abortion, claiming that the blastocyst is being destroyed. According to Anne Kiessling, a stem cell researcher, the irony of the situation is that when opponents of research on fertilized eggs and early embryonic development try to stop such research, they actually inhibit scientific understanding of the process. This in turn impedes the development of new ways to prevent pregnancy, perpetuating the need for abortion itself.2 Many proponents of ESC research find that conferring humanhood on the early blastocyst is problematic because, until roughly the fourteenth day of development, the embryo retains the potential to split and form twins. On this basis, they reason that it is at least counterintuitive for a person to split in two, so one cannot confer humanhood until the potential for a unique human is actually manifested in the embryo through a propensity to acquire a unique biological identity. Whether or not ESC research implies terminating life, one thing is certain: there are hundreds of thousands of frozen blastocysts in fertility clinics around the world destined to be destroyed.

Because of the ethical controversies surrounding the use of fertilized embryos for the isolation of embryonic stem cells, scientists looked for a way to turn a somatic (or body) cell into a human ESC. In theory, all somatic cells in the body are equivalent because they contain the same genome. Patterns of gene expression guide the differentiation of cells along commitment pathways to their lineages: from embryonic to adult stem cells and then to somatic cells. In other words, genes are upregulated and downregulated in different sequences, with varying levels yielding different cell types. It is therefore theoretically possible to take a fibroblast (skin) cell and turn it into a pluripotent cell by inducing the expression of factors that match the expression profile of an ESC. The induction of pluripotency in fibroblasts was first performed in mice by two different research groups, creating high expectations for the possibility of reproducing the work with human fibroblasts. Then, in November 2007, the research journals Cell and Science each published an article on the induction of pluripotency in human fibroblasts by two independent research groups. The news made headlines across the world as a monumental achievement in stem cell research. Still, some scientists in the embryonic stem cell research community insist that this breakthrough is far from a replacement for embryonic stem cell research, and that the current methods of inducing pluripotency are problematic for therapeutic application in human beings.

Despite the controversies, it is indisputable that stem cell research holds great potential to cure a myriad of diseases. The new research on fibroblasts is part of a strong social and scientific movement to make such treatments possible, and perhaps it marks the beginning of the end of the ethical controversy.

References

1. Rick Weiss. Stem Cells, The Power to Divide. National Geographic Magazine. July 2005.
2. Kiessling, A., Anderson, S.C. Human Embryonic Stem Cells (second edition). Jones & Bartlett: October 31, 2006.

Exploring Carbon Nanotubes

by: Varun Rajan, Brown ’09

In a list of the most important materials in nanotechnology today, carbon nanotubes rank near the top.1 Consisting solely of carbon atoms linked in a hexagonal pattern, these cylindrical molecules are far longer than they are wide, similar to rods or ropes. The prefix nano- means one-billionth: a nanotube’s diameter measures only a few nanometers. In part because of their unique size, shape, and structure, carbon nanotubes (CNTs) are exceedingly versatile. Proposed areas of application for CNTs range from electronics and semiconductors, to molecular-level microscopes and sensors, to hydrogen storage and batteries.2 However, CNTs’ special combination of strength, low density, and ductility has also led to speculation about their role as “superstrong materials”3 in structural applications, such as a “space elevator.”4 Before these science-fiction claims become engineering feats, basic questions about carbon nanotubes’ mechanical behavior must be answered. In his research over the past twelve years, Dr. Boris Yakobson, a Rice professor of materials science, has tackled several fundamental questions concerning the failure of nanotubes and the behavior of dislocations. The materials science term “dislocation” refers to a line imperfection or defect in the arrangement of atoms in a CNT; dislocations are important because they affect a material’s mechanical properties.5

How do nanotubes fail?

Determining how carbon nanotubes fail, or lose their capacity to support loads, is a complicated yet important matter; it must be fully understood before nanotubes are used in structural applications. In the article “Assessing Carbon Nanotube Strength,” Yakobson, along with then-postdoctoral researcher Traian Dumitrica and graduate student Ming Hua, used computer simulation to model CNTs and investigate their failure.

According to Yakobson, simulations are valuable because “in principle you have full access to the details of the structure.” He added that one of the advantages of simulations is that the researcher has full control over the experimental conditions and variables. With respect to carbon nanotube failure, some of the most pertinent variables include the level and duration of the applied load, as well as the nanotube’s temperature, diameter, and chiral angle – the angle, ranging from 0 to 30 degrees, that describes how a carbon nanotube is rolled up from a graphite sheet. In addition to affecting the nanotube’s strain (stretch) at failure, these variables also determine the process by which it breaks. Yakobson found that two different mechanisms can cause nanotube failure. At low temperatures, mechanical failure dominates: the bonds between adjacent carbon atoms literally snap. At high temperatures, by contrast, the bonds within the nanotube’s carbon hexagons flip, turning the hexagons into five- and seven-sided figures. This weakens the nanotube structure and initiates a sequence of processes that culminates in complete nanotube failure. Combining the results of numerical simulations and analytical techniques, Yakobson constructed a carbon nanotube strength map: a single figure that illustrates the relationship between the relevant variables, the failure mechanisms, and the failure strain (figure 2). The significance of Yakobson’s research led to its publication as the cover article of the April 18, 2006 issue of Proceedings of the National Academy of Sciences.
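
For readers unfamiliar with chiral indices, the chiral angle and diameter mentioned above follow from two standard geometric relations for a nanotube described by indices (n, m). The sketch below is illustrative only: the example indices are arbitrary and are not tubes from Yakobson’s study, and the graphene lattice constant is the commonly quoted 0.246 nm.

```python
# A small sketch of the standard relations mapping a nanotube's chiral
# indices (n, m) to its diameter and chiral angle (0 to 30 degrees).
# The (n, m) values below are arbitrary examples.
import math

A_GRAPHENE = 0.246  # graphene lattice constant, nm (commonly quoted value)

def cnt_geometry(n: int, m: int) -> tuple:
    """Return (diameter in nm, chiral angle in degrees) for an (n, m) nanotube."""
    diameter = A_GRAPHENE * math.sqrt(n * n + n * m + m * m) / math.pi
    chiral_angle = math.degrees(math.atan2(math.sqrt(3) * m, 2 * n + m))
    return diameter, chiral_angle

for n, m in [(10, 0), (10, 10), (12, 6)]:  # zigzag, armchair, and chiral examples
    d, theta = cnt_geometry(n, m)
    print(f"({n},{m}): diameter = {d:.2f} nm, chiral angle = {theta:.1f} degrees")
```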

How do dislocations behave in carbon nanotubes?

Another area of Yakobson’s work is the study of dislocation behavior. While dislocation dynamics in multiwalled carbon nanotubes might seem a subject only a materials scientist could love, this area of research has great bearing on CNT use in mechanical and electronic applications. Multiwalled carbon nanotubes (MWCNTs) can be visualized as many single-walled carbon nanotubes arranged concentrically, like tree-trunk rings, and interacting with each other via weak intermolecular forces.6,7 Although somewhat difficult to visualize, dislocations — defects in the atomic structure of a CNT — can be viewed as objects that can move, climb, and collide with one another, leading to the term “dislocation dynamics.”
In his research on this topic, Yakobson collaborated with J.Y. Huang at Sandia National Laboratories and F. Ding, a research scientist in Yakobson’s group. Their experimental procedure involved heating a MWCNT to approximately 2,000 °C, which causes its dislocations to mobilize. Using a transmission electron microscope, they tracked the motion and interaction of these dislocations over time. Yakobson said that this powerful microscope gives the experimenters “nearly atomic resolution.” A resolving power of this magnitude creates spectacular images that reveal a rather odd phenomenon: a dislocation can be observed climbing a carbon nanotube wall and combining with a dislocation on an adjacent wall to form a larger dislocation loop, which then continues to climb (figure 3). If this process is repeated throughout the MWCNT, its entire structure becomes a mixture of ‘nanocracks’ and kinks. More importantly, adjacent walls become cross-linked by covalent bonds, whereas formerly they were only weakly connected by van der Waals forces. Cross-linking is important because it “lock[s] the walls together in one entity,” Yakobson said. As a result, transfer between walls becomes more likely, and current can be driven through the cross-linked junction. He also believes that cross-linking is somehow responsible for the mechanical strength of MWCNTs, because the concentric cylinders can no longer easily slide past one another.
Yakobson’s research is simultaneously old and new. It is old because subjects such as dislocation dynamics and material failure are well understood for many materials. Yet it is also new because knowledge in these fields cannot be extended easily to carbon nanotubes.8 Researchers in this field are treading on unexplored ground that will bring the nanotube a step closer to its applications.

References

1. Arnall, A.H. Future Technologies, Today’s Choices: Nanotechnology, Artificial Intelligence and Robotics; A Technical, Political and Institutional Map of Emerging Technologies; Greenpeace Environmental Trust: London, 2003.
2. Collins, P.; Avouris, P. Scientific American 2000, 62-69.
3. Chae, H.; Kumar, S. Science 2008, 319, 908-909.
4. University of Delaware. Space Tourism To Rocket In This Century, Researchers Predict. http://www.sciencedaily.com/releases/2008/02/080222095432.htm (accessed 02/27/08), part of Science Daily.
5. Dumitrica, T.; Hua, M.; Yakobson, B. Proc. Natl. Acad. Sci. 2006, 103, 6105-6109.
6. Cumings, J.; Zettl, A. Science 2000, 289, 602-604.
7. Baughman, R.; Zakhidov, A.; de Heer, W. Science 2002, 297, 787-792.
8. Huang, J.Y.; Ding, F.; Yakobson, B. Physical Review Letters 2008, 100, 035503.

The Promise of Adult Neurogenesis

by: Elianne Ortiz, Hanszen ’11

Contrary to popular belief, the number of neurons in the human brain is not fixed at birth. Through a process called neurogenesis, stem cells continue to differentiate into neurons throughout adulthood in specific regions of the brain — namely the olfactory bulb and hippocampus. The olfactory bulb is responsible for smell, while the hippocampus plays a role in long-term memory. Neurons in the hippocampus proliferate with sufficient mental and physical exercise, but the purpose of these new neurons long remained unknown. A recent study by a team of investigators at the Salk Institute in La Jolla, California, finally shows some promise of shedding light on this mystery. They developed a method to genetically engineer mice in which the processes responsible for neurogenesis can be turned off.

In an earlier study, researchers Ronald M. Evans, Ph.D., and Fred H. Gage, Ph.D., discovered a crucial mechanism that keeps adult neuronal stem cells in an undifferentiated, proliferative state.3,4 After learning more about its specific function, Dr. Chun-Li Zhang, a postdoctoral fellow at the Salk Institute, was able to turn off this mechanism in mice. This procedure effectively suppressed neurogenesis in the hippocampus, allowing the scientists to identify how newborn neurons affect brain function.

The altered mice were then put through a series of behavioral and cognitive tests, one of which yielded results that conflicted with those of the control population. The Morris water maze is used to study the formation of learning strategies and spatial memories. Mice placed in deep water try to find a submerged platform with the help of cues marked along the walls of the pool. As the test was repeated, normal mice remembered the cues and located the platform with relative ease. In contrast, the mice genetically engineered to lack neurogenesis showed slower improvement. These mice experienced significant difficulty in finding the submerged platform, and their performance declined as the task was made more demanding. Although they were slower at forming efficient strategies, their behavior was very similar to that of the control mice by the end of the experiment. “It’s not that they didn’t learn, they were just slower at learning the task and didn’t retain as much as their normal counterparts,” Zhang said in an interview with Science Daily.1

This study suggests that neurogenesis has a specific role in the long-term storage of spatial memory, the part of memory responsible for processing and recording information from the environment. “Whatever these new neurons are doing it is not controlling whether or not these animals learn. But these new cells are regulating the efficiency and the strategy that they use to solve the problem,” Gage explained to Science Daily.1

In previous studies, Gage and his team showed how certain activities trigger neurogenesis. For instance, increased mental and physical exercise led to an increased number of stem cells differentiating into neurons.3 Many of these neurons did not survive, although continued stimulation increased the number that did. Zhang’s water maze study now provides an important tool for others to study the effects of decreased neurogenesis. Previous attempts using radiation and mitotic inhibitors shut down not just neurogenesis but all cell division, and thus led to contradictory results.

The significance of Zhang’s research on adult neurogenesis is well founded. Over 5 million people in the U.S. suffer from Alzheimer’s disease and other neurodegenerative disorders. Studies such as these give hope that there may be a way to influence memory function by stimulating neurogenesis with therapeutic drugs. If perfected, such methods might allow a debilitating disease like Alzheimer’s to be treated with a drug, followed by physical and mental stimulation. Many neurodegenerative disorders have no cure, and symptoms can only be alleviated for a short period of time before damage becomes severe. These groundbreaking studies have the potential to save millions from the trauma of memory deterioration.

References

1. Science Daily. http://www.sciencedaily.com/releases/2008/01/080130150525.htm. (accessed Feb 26, 2008).

2. Shi Y, Chichung Lie D, Taupin P, Nakashima K, Ray J, Yu RT, Gage FH, Evans RM. Expression and function of orphan nuclear receptor TLX in adult neural stem cell. Nature. 1 Jan 2004.

3. Tashiro A, Makino H, Gage FH. Experience-specific functional modification of the dentate gyrus through adult neurogenesis: a critical period during an immature stage. The Journal of Neuroscience. 21 Mar 2007.

4. Zhang CL, Zou Y, He W, Gage FH, Evans RM. A role for adult TLX-positive neural stem cells in learning and behaviour. Nature. 21 Feb 2008.

Antibiotic Resistance Threat Demands Novel Research

by: Sarah Wu and Professor Yousif Shamoo

Abstract

Only recently has the U.S. public become aware of the dangers of antibiotic resistance. Overuse of antibiotics is chiefly responsible for the proliferation of super-resistant strains. An intimate understanding of resistance mechanisms will prove crucial for continued success against pathogenic bacteria. Traditionally, antibiotics have targeted processes such as cell wall synthesis and DNA replication. However, the decreasing effectiveness of optimizing existing antibiotics has prompted scientists to explore new strategies, such as using combinations of antibiotics, examining evolutionary pathways leading to resistance, and targeting metabolic processes.

Introduction

The problem of antibiotic resistance catapulted to the attention of the U.S. public when seventeen-year-old Ashton Bonds died in October 2007 from MRSA (methicillin-resistant Staphylococcus aureus), which he contracted in his high school locker room.1 Studies conducted by the Centers for Disease Control and Prevention estimate that deaths in the U.S. due to MRSA in 2005 exceeded those from HIV-AIDS, Parkinson’s disease, emphysema and homicide.2 While traditionally associated with hospital ICUs and nursing homes, deadly bacterial infections have emerged in schools, gyms, and day-care centers, alarming the nation.

Historically, antibiotics have demonstrated remarkable effectiveness against bacteria, and many consider them one of the most important discoveries of modern science. The discovery of penicillin by Alexander Fleming in 1928 prevented many deaths and amputations on the battlefields of WWII.3 Sulfa drugs not only accelerated research in the pharmaceutical industry but also served as successful starting points for the creation of new drugs to treat a variety of diseases.4 Despite their remarkable effectiveness, antibiotics are slowly losing their edge against pathogens, as seen in the recent rise of strains such as MRSA: infections doubled from 127,000 in 1999 to 278,000 in 2005, while associated deaths increased from 11,000 to more than 17,000.5

As medicine integrated antibiotics into common practice for treating infectious diseases such as bronchitis, pneumonia, and syphilis, antibiotics became, in the mind of the general public, the expected and preferred cure for unpleasant symptoms. These drugs have come to be seen as a magic cure-all that doctors can easily dispense to their patients, who increasingly expect instant treatments. According to rationalmedicine.org, physicians may over-prescribe antibiotics for fear of negligence lawsuits if drugs are not prescribed. In addition to facing pressure from drug companies to prescribe their medications, physicians risk losing their patients to other physicians who are less hesitant to prescribe antibiotics.6 Sore throat, for example, is usually caused by a viral infection (and is therefore unresponsive to antibiotics), with only 5-17% of cases caused by bacteria. Yet a 2001 study by Linder and Stafford revealed that from 1989 to 1999, the use of recommended antibiotics like penicillin for treating sore throat decreased while the use of nonrecommended broad-spectrum antibiotics, such as extended-spectrum macrolides and fluoroquinolones, increased.7 Another study showed that half of the antibiotic prescriptions written by emergency medicine physicians were for viral infections.8 Because of these and other factors leading to faulty administration, antibiotics are not being used effectively.

Antibiotics are also widely used in agriculture. Animals raised for consumption in the U.S. frequently come from factory farms that maximize yield by placing as many animals as possible into a limited space. Close quarters such as these facilitate the spread of infectious disease, making it necessary to treat the animals regularly with antibiotics as a preventative measure. Antibiotics are also administered to compensate for poor conditions such as inadequate nutrition and imperfect caretaking.9 This is worrisome, as there have been reports of animal-to-human transmission of disease. A 2006 study documented a case in which a strain of MRSA from a family’s pig farm infected a baby girl, demonstrating the ease with which resistant strains can arise in farm animal populations and spread to humans.10

Resistance to antibiotics is not a novel phenomenon in biology and has in fact occurred for millions of years. Streptomyces, a genus of bacteria that lives in soil and water, excretes a variety of antimicrobial substances into the nearby soil to prevent the growth of competing microorganisms.11 The Penicillium mold has similar defenses. It is not surprising that we derive many of our antibiotics from nature.

If bacteria are quickly becoming resistant to antibiotics, why do pharmaceutical companies hesitate to develop replacement drugs? In short, drug development is very difficult, expensive, and often unprofitable. It is estimated that the cost of developing one successful drug is around $802 million. Drugs take about 12 years to be approved by the FDA, and only about one out of 5,000 candidate substances is approved, a 0.02% success rate. The pharmaceutical company then holds the drug patent for 20 years before the drug becomes generic, allowing anyone to produce it.12 In addition, the scope of bacteria against which most antibiotics are effective quickly narrows with increased usage. The drug company is under pressure to sell as much of the drug as possible to recoup the cost of development, which means aggressive marketing to physicians. Physicians are subsequently faced with a dilemma, as they want to limit the use of antibiotics to maintain their effectiveness yet feel pressured into prescribing them. Currently, a large portion of research is dedicated to the mechanisms and selection of antibiotic resistance, which scientists attribute to evolutionary processes.

Evolution’s role in antibiotic resistance

Evolution is the continual adaptation of a population to its environment. Mutations at the nucleotide level alter the structure of proteins and can produce phenotypes that confer increased resistance to antibiotics. When a naive population of bacteria is exposed to an antibiotic, the vast majority is eliminated; the surviving bacteria, however, are resistant and give rise to antibiotic-resistant progeny. Thus, repeated administration of the same antibiotic results in the proliferation of resistant phenotypes and a decrease in antibiotic efficacy. Because bacteria are unicellular organisms that divide roughly every thirty minutes, they quickly adapt to environmental challenges such as antibiotics. The combined effect of many divisions and mutations gives rise to numerous resistance mechanisms.
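
To get a feel for how quickly a thirty-minute division time compounds, the short sketch below follows a single surviving resistant cell through a day of idealized, unconstrained growth. The numbers are a back-of-the-envelope illustration and ignore nutrient limits and cell death, so they overstate what happens in a real culture.

```python
# Back-of-the-envelope sketch: one resistant survivor dividing every
# 30 minutes, with no resource limits or cell death (an idealization).
DOUBLING_TIME_MIN = 30
HOURS = 24

generations = HOURS * 60 // DOUBLING_TIME_MIN  # 48 generations in a day
population = 2 ** generations
print(f"{generations} generations -> about {population:.2e} descendants of a single cell")
```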

Antibiotics: Mechanism and Response

Bacterial cell walls serve the critical function of maintaining osmotic stability, making their synthesis a favorite target of bactericidal antibiotics. Bacteria are classified into two categories based on the type of cell wall they have: gram-positive or gram-negative (Fig. 1). “Gram” refers to the staining protocol developed by Hans Christian Gram in 1884.13 The crystal violet stain used in this procedure colors the outer peptidoglycan wall of gram-positive bacteria purple but leaves gram-negative bacteria pink, as their peptidoglycan net is contained between their dual membranes. This peptidoglycan net is made up of alternating units of N-acetylmuramic acid (NAM) and N-acetylglucosamine (NAG), which are cross-linked by transpeptidase enzymes, also known as penicillin-binding proteins (PBPs). The peptidoglycan net is a critical component of the bacterium, as it allows the cell to withstand its high internal osmotic pressure.

β-lactam drugs (Fig. 2), such as penicillin and ampicillin, are the most widely used antibiotics; they inhibit bacterial cell wall synthesis. Because their structure mimics the terminal D-Ala-D-Ala of the NAM pentapeptide chain, they are mistakenly used by the transpeptidases as a substrate, resulting in acylation of the enzyme. The acylated transpeptidase is unable to hydrolyze the β-lactam and is thus no longer functional, which hinders cell wall synthesis. The cell wall becomes susceptible to the autolytic enzymes that degrade it and eventually becomes permeable to the environment, leading to bacterial lysis.14

Bacteria have responded to β-lactam drugs in three main ways. First, the active site of the transpeptidase can be altered to have a lower affinity for β-lactams. Second, gram-negative bacteria can restrict the influx of drugs into the periplasm, either by altering the expression of outer membrane proteins or by expressing antibiotic efflux pumps such as MexA.15 The most common mechanism of β-lactam resistance, however, uses an enzyme to inactivate the drug before it can bind transpeptidase. This enzyme, known as β-lactamase (Fig. 3), is very similar in structure to the PBPs and uses a strategically placed water molecule to hydrolyze the β-lactam ring of the drug, rendering it inactive. As new forms of β-lactam drugs have been introduced, bacteria have evolved ever more efficient forms of this enzyme.

Cell wall synthesis, however, is not the only common target of antibiotics. Fluoroquinolones (Fig. 2) inhibit DNA replication by stabilizing the enzyme-DNA complex formed by topoisomerase II (DNA gyrase) or topoisomerase IV, blocking DNA synthesis. Bacterial mechanisms of resistance to fluoroquinolones are very similar to those for β-lactam resistance: mutations to the target sites of DNA gyrase and topoisomerase IV decrease drug binding, increased expression of efflux pumps removes the drug, and the loss of porins in the membrane restricts entry of the drug into the cytoplasm.16

Current Research in the Field

For many years, the main response to emerging resistance was to introduce a new form of the antibiotic. Unfortunately, little progress has been made recently in developing new effective compounds, eliminating the possibility of a quick win in the search for new bacterial growth inhibitors.17 This necessitates the study of novel approaches to eradicating pathogenic bacteria.

Drug cocktails

Some researchers have pursued the idea of combining antibiotics in the hope of a synergistic effect. The combination of two drugs is expected to lead to a more effective therapy, as is the case with the cocktail of drugs given to HIV patients. However, predicting the efficacy of drug combinations has proven difficult, making direct experimentation necessary. Cefotaxime and minocycline were found to be an effective combination against Vibrio vulnificus,18 although adding aminoglycosides to β-lactams for the treatment of sepsis was found to have no advantage and actually increased the risk of nephrotoxicity.19 One study characterized combinations of drugs as synergistic, additive, or antagonistic (depending on whether the cumulative effect of the drugs is greater than, equal to, or less than the combined effect of their individual activities) and demonstrated how certain combinations of drugs could select against drug resistance. In a competition assay between doxycycline-resistant and wild-type E. coli at certain concentrations of doxycycline and ciprofloxacin, the wild-type strain was found to have a growth advantage over the doxycycline-resistant strain.20 While this study was limited to sublethal concentrations of antibiotics, it suggests the potential to forestall drug resistance through combinations that select against the development of resistance.
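
To make the synergistic/additive/antagonistic distinction concrete, the sketch below compares a measured combined effect against a simple null model of independent drug action (Bliss independence). The null model and the inhibition values are illustrative assumptions for this sketch, not the definitions or data used in the studies cited above.

```python
# Hedged sketch: classify a drug pair by comparing its measured combined
# effect with the effect expected if the two drugs acted independently
# (Bliss independence). Effects are fractional growth inhibitions in [0, 1].
def classify_interaction(effect_a: float, effect_b: float,
                         effect_combined: float, tol: float = 0.05) -> str:
    expected = effect_a + effect_b - effect_a * effect_b  # independent-action null model
    if effect_combined > expected + tol:
        return "synergistic"
    if effect_combined < expected - tol:
        return "antagonistic"
    return "additive"

# Hypothetical inhibition values, purely for illustration:
print(classify_interaction(0.40, 0.50, 0.85))  # well above the 0.70 expected -> synergistic
print(classify_interaction(0.40, 0.50, 0.70))  # close to expected            -> additive
print(classify_interaction(0.40, 0.50, 0.55))  # well below expected          -> antagonistic
```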

Predicting the evolutionary trajectories of antibiotic resistance

Other research concentrates on the evolutionary processes of resistance. One study identified five key β-lactamase mutations that together led to a 100,000-fold increase in resistance against cefotaxime. While in principle there are 120 (5!) possible pathways (also known as trajectories) by which all five mutations could accumulate, the study demonstrated that a large number of these pathways are effectively inaccessible, because a beneficial mutation has a much higher probability of fixation in the population than a deleterious or neutral one, and many combinations of the five mutations did not increase resistance unless certain mutations preceded others. In the end, the authors found only 18 probable trajectories leading to the highly effective enzyme carrying all five mutations.21 This study shows the surprisingly limited nature of evolutionary pathways and the possibly predictable nature of evolution. While this study was strictly theoretical in its execution, similar investigations into the evolutionary trajectories of antibiotic resistance at the molecular level are being conducted using in vivo models with the hope of understanding the mutational landscape of the development of drug resistance.
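
The trajectory-counting logic described above is easy to sketch in code: enumerate all 120 orderings of the five mutations and keep only those in which resistance rises at every step. The resistance values below are randomly generated stand-ins and the mutation names are placeholders; the actual study measured resistance experimentally for all 32 genotypes, which is how it arrived at the figure of 18 accessible trajectories.

```python
# Sketch of counting "accessible" mutational trajectories: of the 5! = 120
# orderings of five mutations, keep only those in which every step strictly
# increases resistance. The landscape here is random and purely illustrative.
import random
from itertools import combinations, permutations

MUTATIONS = ("m1", "m2", "m3", "m4", "m5")  # placeholder names, not the real beta-lactamase mutations

def count_accessible(resistance):
    accessible = 0
    for order in permutations(MUTATIONS):  # 120 candidate trajectories
        genotype, ok = frozenset(), True
        for mut in order:
            nxt = genotype | {mut}
            if resistance[nxt] <= resistance[genotype]:
                ok = False  # a non-beneficial step is unlikely to fix in the population
                break
            genotype = nxt
        accessible += ok
    return accessible

# Assign a random resistance level to each of the 2^5 = 32 genotypes.
random.seed(1)
landscape = {frozenset(c): random.uniform(1.0, 1e5)
             for k in range(6) for c in combinations(MUTATIONS, k)}
landscape[frozenset()] = 1.0  # wild-type baseline

print(count_accessible(landscape), "of 120 orderings are accessible on this random landscape")
```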

Targeting Bacterial Metabolism

Other studies have looked beyond conventional drug targets and into bacterial metabolism, suggesting that all bactericidal antibiotics kill bacteria in the same way: by causing them to produce hydroxyl radicals, which damage proteins, membrane lipids, and DNA. A methodical sequence of experiments probes this idea, starting with observations of increased hydroxyl radical production in bacteria treated with bactericidal antibiotics. The Fenton reaction, which produces hydroxyl radicals through the reduction of hydrogen peroxide by ferrous iron, is considered the most significant contributor of hydroxyl radicals. Bacteria were found to live longer when exposed to thiourea, a hydroxyl radical scavenger, as well as 2,2’-dipyridyl, an iron chelator that inhibits the Fenton reaction. To determine whether the ferrous iron was extracellular or intracellular in origin, a knockout strain with disabled iron import (∆tonB) was created, along with a knockout with impaired iron-sulfur cluster synthesis (∆iscS). While ∆tonB exhibited no advantage against antibiotics, ∆iscS showed reduced hydroxyl radical formation and cell death, pointing to intracellular ferrous iron as the source of the hydroxyl radicals. It is established that ferrous iron is released when superoxide damages iron-sulfur clusters, and that most superoxide formation comes from oxidation in the electron transport chain and the conversion of NADH to NAD+. Gene expression microarrays revealed that upon exposure to antibiotics, NADH dehydrogenase I was a key upregulated pathway. The study proposes the following mechanism: antibiotics stimulate oxidation of NADH, which induces hyperactivation of the electron transport chain, stimulating superoxide formation that damages iron-sulfur clusters in the cell; these clusters release ferrous iron, which drives the Fenton reaction and produces hydroxyl radicals. Bacteria respond to this damage through the SOS response: RecA is activated and stimulates SOS response genes that initiate DNA repair. Knocking out RecA produced a significant increase in cell death and in sensitivity to antibiotics compared with the wild type. The study demonstrates the possibility of targeting the TCA cycle or the respiratory chain when developing new antibiotics.22

Complementary strategies

While a large burden rests on scientists and drug companies, there are steps the public can take to slow the proliferation of resistant strains. The Centers for Disease Control and Prevention recommends not pressuring one’s health care provider to prescribe antibiotics, taking antibiotics only for bacterial infections, and completing the full prescribed course so as not to spare bacteria that could become drug resistant.23 Improved infection control would also limit the spread of resistant strains: rapid methods of identifying pathogens lead to faster isolation of colonized patients, making it harder for disease to spread,24 and the simple practice of washing hands limits transmission. Citizens can also advocate for legislation that limits the use of antibiotics in the agricultural industry.
As observed in both the laboratory and daily life, careless use of antibiotics erodes our painstakingly acquired advantage over bacteria. Even with cautious usage, however, antibiotics will still select for resistant strains. Ongoing research is therefore critical to discovering successful novel treatment strategies.

References

1. Urbina, Ian. “Schools in Several States Report Staph Infections, and Deaths Raise the Alarm.” The New York Times. 19 Oct 2007.
2. Sack, Kevin. “Deadly Bacteria Found to Be More Common.” The New York Times. 17 Oct 2007.
3. “penicillin.” Encyclopædia Britannica. 2008. Encyclopædia Britannica Online. Accessed 23 Mar. 2008.
4. Lesch, John E. The First Miracle Drugs: How the Sulfa Drugs Transformed Medicine. New York: Oxford University Press, 2007.
5. Klein E, Smith DL, Laxminarayan R (2007). “Hospitalizations and Deaths Caused by Methicillin-Resistant Staphylococcus aureus, United States, 1999–2005”. Emerg Infect Dis 13 (12): 1840–6.
6. Kakkilaya, B.S. “Antimicrobials: The Rise and Fall”. rationalmedicine.org. Accessed 24 Mar. 2008.
7. Linder, J.A. and Stafford, R.S. Antibiotic Treatment of Adults With Sore Throat by Community Primary Care Physicians: A National Survey, 1989-1999. JAMA (2001) vol. 286 pp. 1181-1186.
8. Diane M. Birnbaumer. Emergency physicians overprescribe antibiotics for viral respiratory infections. Journal Watch Emergency Medicine, April 2006.
9. “feed.” Encyclopædia Britannica. 2008. Encyclopædia Britannica Online. Accessed 23 Mar. 2008.
10. Huijsdens et al. Community-acquired MRSA and pig-farming. Ann Clin Microbiol Antimicrob (2006) vol. 5 pp. 26
11. “Streptomyces.” Encyclopædia Britannica. 2008. Encyclopædia Britannica Online. Accessed 24 Mar. 2008.
12. Crowner, Robert. “Drug Pricing vs. Drug Development.” Acton Commentary. 15 May 2002.
13. “Gram stain.” Encyclopædia Britannica. 2008. Encyclopædia Britannica Online. Accessed 23 Mar. 2008.
14. Babic et al. What’s new in antibiotic resistance? Focus on beta-lactamases. Drug Resist Updat (2006) vol. 9 (3) pp. 142-56
15. Wilke et al. Beta-lactam antibiotic resistance: a current structural perspective. Curr Opin Microbiol (2005) vol. 8 (5) pp. 525-33
16. Chen et al. Molecular mechanisms of fluoroquinolone resistance. J Microbiol Immunol Infect (2003) vol. 36 (1) pp. 1-9
17. Projan. New (and not so new) antibacterial targets – from where and when will the novel drugs come?. Current opinion in pharmacology (2002) vol. 2 (5) pp. 513-22
18. Chiang et al. Synergistic antimicrobial effect of cefotaxime and minocycline on proinflammatory cytokine levels in a murine model of Vibrio vulnificus infection. J Microbiol Immunol Infect (2007) vol. 40 (2) pp. 123-33
19. Paul et al. Beta lactam monotherapy versus beta lactam-aminoglycoside combination therapy for sepsis in immunocompetent patients: systematic review and meta-analysis of randomised trials. BMJ (2004) vol. 328 (7441) pp. 668
20. Chait et al. Antibiotic interactions that select against resistance. Nature (2007) vol. 446 (7136) pp. 668-71
21. Weinreich et al. Darwinian evolution can follow only very few mutational paths to fitter proteins. Science (2006) vol. 312 (5770) pp. 111-4
22. Kohanski et al. A Common Mechanism of Cellular Death Induced by Bactericidal Antibiotics. Cell (2007) vol. 130 (5) pp. 797-810
23. “Antibiotic/Antimicrobial Resistance Prevention Tips”. Centers for Disease Control and Prevention. Accessed 24 Mar. 2008.
24. Lowy. Antimicrobial resistance: the example of Staphylococcus aureus. J Clin Invest (2003) vol. 111 (9) pp. 1265-73

Molecular Dynamics: The Art of Computer Simulation

by: Yuekai Sun and Professor Jun Lou

Abstract

Molecular dynamics (MD) is a computer simulation technique that studies particle models by simulating the time evolution of a system of interacting particles. MD is commonly employed in materials science, nanotechnology and biochemistry to study various processes at the atomic scale. The techniques of MD have also been adapted for use in fluid dynamics and astrophysics. In this article, we introduce the basic science and computational methods behind MD and explore some of its current applications.

Given a system consisting of n particles subjected to interactions described by a given force field, can the trajectory of every particle be predicted? 18th century French astronomer and mathematician Pierre-Simon Laplace said of solving the above problem, called the n-body problem, “Given for one instant an intelligence which could comprehend all the force by which nature is animated and the respective situation of the beings who compose it—an intelligence sufficiently vast to submit these data to analysis—it would embrace in the same formula the movements of the greatest bodies of the universe and those of the lightest atom; for it, nothing would be uncertain and the future, as the past, would be present to its eyes.”

Molecular dynamics (MD) solves the n-body problem by numerically integrating the equations of motion for every particle. There are two common forms of MD, ab-initio MD and classical MD. Ab-initio MD solves the many-body problem for a system governed by the Schrödinger equation with the Born-Oppenheimer approximation (without this approximation, the Schrödinger equation cannot be solved even numerically for many systems). At every timestep, the Schrödinger equation for electrons is solved to compute the potential. The position and velocity of every nucleus are then propagated by numerically integrating the classical equations of motion. To solve the Schrödinger equation for electrons, various approximations derived with methods such as the tight-binding model or density functional theory are used. Ab-initio MD can simulate quantum mechanical phenomena, such as bonding between atoms and heat conduction in metals, but the computational cost limits the size of systems that can be studied with this model to a few thousand atoms.

Classical MD simplifies the ab-initio approach by replacing the quantum mechanical potential computations with a parameterized empirical potential function that depends only on the position of nuclei. This potential function is either fitted to experimental data or derived from the underlying quantum mechanical principles. Classical MD can be used to study systems with up to millions of atoms but cannot simulate phenomena that depend on electron behavior. We will restrict our discussion to classical MD.
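
As a concrete, though highly simplified, illustration of what a parameterized empirical pair potential looks like, the sketch below implements the 12-6 Lennard-Jones form often used in classical MD for nonbonded interactions. The parameter values EPSILON and SIGMA here are placeholders rather than values fitted to any particular material.

```python
import numpy as np

# Minimal sketch of a parameterized empirical pair potential (12-6 Lennard-Jones).
EPSILON = 1.0   # depth of the potential well (illustrative energy units)
SIGMA = 1.0     # distance at which the potential crosses zero (illustrative length units)

def lj_potential(r):
    """Lennard-Jones energy for an interatomic distance r."""
    sr6 = (SIGMA / r) ** 6
    return 4.0 * EPSILON * (sr6 ** 2 - sr6)

def lj_force(r):
    """Magnitude of the force, -dU/dr, along the interatomic axis."""
    sr6 = (SIGMA / r) ** 6
    return 24.0 * EPSILON * (2.0 * sr6 ** 2 - sr6) / r

r = np.linspace(0.9, 3.0, 5)
print(lj_potential(r))
print(lj_force(r))
```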

The name molecular dynamics is somewhat misleading because, given the nearly universal nature of the many-body problem, MD can model any system of interacting particles. MD is employed in materials science to study a variety of phenomena, including the behavior of materials under stress, crack propagation, and how various defects affect the strength of materials. It is also commonly used in biochemistry to study the behavior of macromolecules. The techniques of MD have been generalized to thermal science, where particle models are used to study hydrodynamic instabilities and phase transitions, and to astrophysics, where they are used to study the structure of the universe.2

Interaction computations

The computation of interactions between atoms is typically the most computationally expensive segment of an MD simulation. For a pair-wise potential (a potential described by a function that does not account for the environment of the atoms), the interaction computations scale as O(N²). Various methods have been developed that reduce this to O(N) for short-range potentials (potentials that decay faster with distance than the dimension of the system are considered short-range) and O(N log N) for long-range potentials. The most common methods are the linked-cell and neighbor list methods for short-range potentials and the Ewald summation method for long-range potentials.

Both the linked-cell and neighbor list methods assume that the potential and its spatial derivative at the location of an atom can be approximated by the superposition of the potential contributions from atoms within a certain interaction cutoff distance rc. Mathematically, the potential function U(rij) is approximated by the truncated potential function Û(rij), which equals U(rij) when rij ≤ rc and zero otherwise.

Computing the distance between every atom pair, however, costs O(N²) operations. The linked-cell method avoids this by decomposing the system spatially into cells of side length greater than or equal to the cutoff distance. Every atom interacts only with atoms in the same cell or an adjacent cell, so only distances between atom pairs in the same and adjacent cells must be computed. Atoms are resorted into cells every 10 to 20 timesteps. This reduces the complexity of the computation to O(N).2
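
The following Python sketch illustrates the linked-cell bookkeeping for a small non-periodic cubic box (an assumption made here for brevity; production codes handle periodic boundaries and parallel decomposition): atoms are binned into cells no smaller than the cutoff, and candidate pairs are drawn only from the same and adjacent cells.

```python
import numpy as np
from collections import defaultdict
from itertools import product

def build_cells(positions, box_length, cutoff):
    """Bin atom indices into cubic cells with edge length >= cutoff."""
    n_cells = max(1, int(box_length // cutoff))
    cell_size = box_length / n_cells
    cells = defaultdict(list)
    for i, r in enumerate(positions):
        idx = tuple(np.minimum((r // cell_size).astype(int), n_cells - 1))
        cells[idx].append(i)
    return cells, n_cells

def short_range_pairs(positions, box_length, cutoff):
    """Return atom pairs closer than the cutoff, checking only same and adjacent cells."""
    cells, n_cells = build_cells(positions, box_length, cutoff)
    pairs = []
    for (cx, cy, cz), atoms in cells.items():
        for dx, dy, dz in product((-1, 0, 1), repeat=3):
            nx, ny, nz = cx + dx, cy + dy, cz + dz
            if not (0 <= nx < n_cells and 0 <= ny < n_cells and 0 <= nz < n_cells):
                continue                      # non-periodic box: skip cells outside it
            for i in atoms:
                for j in cells.get((nx, ny, nz), []):
                    if i < j and np.linalg.norm(positions[i] - positions[j]) < cutoff:
                        pairs.append((i, j))
    return pairs

positions = np.random.rand(200, 3) * 10.0     # 200 atoms in a 10 x 10 x 10 box
print(len(short_range_pairs(positions, box_length=10.0, cutoff=2.5)), "pairs within the cutoff")
```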

The linked-cell method is commonly used in parallel molecular dynamics codes because it simplifies communication between threads: each thread is assigned a cell and, every time atoms are resorted, receives a list of the atoms in its own cell and in every adjacent cell.

Figure 1 Linked-cell method: To compute the interactions of the filled-in atom, only atoms in the same (dark-shaded) and adjacent (light-shaded) cells are considered. Only atoms within the shaded disk interact with the filled-in atom.

The neighbor list method creates, for every atom, a list of all atoms within a certain distance, so each atom interacts only with the atoms in its neighbor list. The neighbor list cutoff distance must be greater than the interaction cutoff distance, and the difference between the two determines how frequently the lists must be rebuilt. The neighbor list method also reduces the complexity of the computation to O(N).2
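
A minimal sketch of the neighbor (Verlet) list idea follows, again assuming a small non-periodic system: the list radius is the interaction cutoff plus a "skin", and the list only needs rebuilding once some atom has moved more than half the skin. The function names and the brute-force build loop are our own choices for illustration.

```python
import numpy as np

def build_neighbor_list(positions, cutoff, skin):
    """Build per-atom lists of neighbors within cutoff + skin."""
    r_list = cutoff + skin
    n = len(positions)
    neighbors = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):            # O(N^2) here; a linked-cell pass would avoid this
            if np.linalg.norm(positions[i] - positions[j]) < r_list:
                neighbors[i].append(j)
                neighbors[j].append(i)
    return neighbors, positions.copy()       # keep the positions used to build the list

def needs_rebuild(positions, positions_at_build, skin):
    # once any atom has moved more than skin/2, an atom outside the list
    # could have entered the interaction cutoff, so the list is stale
    displacement = np.linalg.norm(positions - positions_at_build, axis=1)
    return displacement.max() > 0.5 * skin

positions = np.random.rand(100, 3) * 10.0
neighbors, reference_positions = build_neighbor_list(positions, cutoff=2.5, skin=0.5)
print(sum(len(n) for n in neighbors) // 2, "pairs inside the list radius")
```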

Figure 2 Neighbor list method: The neighbor list for the filled-in atom contains all the atoms in the light-shaded disk. Only atoms within the dark-shaded disk interact with the filled-in atom.

For long-range potentials, the interaction cutoff distance would have to be very large before the potential at the location of an atom could be approximated by the contributions of atoms within the cutoff. The Ewald summation method, the most common approach, reduces the computational complexity to O(N log N)3 by decomposing the system spatially into cells and assuming that the interactions of an atom with every other atom in the system can be approximated by its interactions with the other atoms in the same cell and with their images in periodic cells tessellated along all dimensions.

Mathematically, this can be expressed as:

Ui ≈ Σn Σj=1..N Ø(|rij + nL|)

where the outer sum runs over the integer offset vectors n = (nx′, ny′, nz′) of the periodic images, with −nx ≤ nx′ ≤ nx, −ny ≤ ny′ ≤ ny and −nz ≤ nz′ ≤ nz; N is the number of atoms in the cell; nx, ny and nz are half the number of periodic cells in the x, y and z directions respectively; Ø(rij) is the slowly decaying part of the potential function; L is the side length of the cell; and d is the dimension of the system (d = 3 above). The Ewald summation method then decomposes this sum into a short-range term that can be summed using the techniques for short-range potentials and a long-range term that can be summed with Fourier methods. A detailed mathematical treatment is beyond the scope of this article; we refer the reader to reference 3 for an introduction to the Ewald summation method.

Integration methods

The solution to an n-body problem can be chaotic, and it is fruitless to seek an accurate trajectory beyond several “collision steps”. The main criteria for choosing an integration method for the equations of motion in MD are therefore energy conservation, the ability to accurately reproduce thermodynamic ensemble properties, and computational cost. Because interaction computations dominate the cost of an MD simulation, an integration method requiring more than one interaction computation per timestep is inefficient unless it can maintain the same accuracy with a proportionally larger timestep. This criterion disqualifies the common Runge-Kutta methods.

Two commonly used families of integration methods are the Verlet algorithm and its variations, and the predictor-corrector methods. The Verlet algorithm can be derived from the Taylor expansions of the position variable r:

r(t + h) = r(t) + h v(t) + (h²/2) a(t) + (h³/6) b(t) + O(h⁴)
r(t − h) = r(t) − h v(t) + (h²/2) a(t) − (h³/6) b(t) + O(h⁴)

where v, a and b are the first, second and third time derivatives of r and h is the timestep. Adding the expressions for r(t + h) and r(t − h) and solving for r(t + h):

r(t + h) = 2r(t) − r(t − h) + h² a(t) + O(h⁴)

One drawback of the Verlet algorithm is the lack of a velocity term in the expressions, making it difficult to compute the atomic velocities. Common variations of the Verlet algorithm designed to correct this drawback are the leap-frog algorithm and velocity-Verlet algorithm.
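
The velocity-Verlet variant mentioned above keeps positions and velocities synchronized while still needing only one interaction computation per timestep. The sketch below is a generic illustration, assuming a user-supplied compute_forces(positions) function and per-atom masses; it is not drawn from any particular MD code.

```python
import numpy as np

def velocity_verlet_step(positions, velocities, forces, h, compute_forces, masses):
    """Advance positions and velocities by one timestep h using velocity-Verlet."""
    accelerations = forces / masses[:, None]
    positions = positions + h * velocities + 0.5 * h * h * accelerations
    new_forces = compute_forces(positions)           # one interaction computation per step
    new_accelerations = new_forces / masses[:, None]
    velocities = velocities + 0.5 * h * (accelerations + new_accelerations)
    return positions, velocities, new_forces

# tiny demonstration with a harmonic restoring force toward the origin
def compute_forces(positions):
    return -positions

masses = np.ones(4)
pos = np.random.randn(4, 3)
vel = np.zeros((4, 3))
frc = compute_forces(pos)
for _ in range(1000):
    pos, vel, frc = velocity_verlet_step(pos, vel, frc, 0.01, compute_forces, masses)
print(pos[0])
```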

The predictor-corrector (PC) family of methods consists of three steps. First, atomic positions and velocities are propagated according to the Taylor expansions of the position and velocity variables. Then the interactions are computed at the new positions, and the resulting accelerations are compared to the accelerations predicted by the Taylor expansion of the acceleration variable. Finally, the positions and velocities are “corrected” according to the difference between the computed and predicted accelerations. Although PC methods require two interaction computations per timestep, they can maintain the same accuracy with a timestep that is more than twice as long.3

PC methods have mostly been discarded in favor of the simpler Verlet methods, which give better energy conservation and are easier to implement. We refer the reader interested in a derivation of the basic PC algorithm to reference 1.

Constraint dynamics

Geometric constraints can be implemented within the above integration methods to simulate rigid molecules. The most common method is the SHAKE algorithm and its variations. The SHAKE algorithm first advances the atomic positions and velocities while ignoring the constraints; the new positions and velocities are then corrected by introducing a constraint force whose magnitude is determined by the displacement of the new positions from their constrained positions. The rigid molecule approach is not appropriate for large molecules; methods for simulating flexible molecules have been developed, although we will not introduce them in this article, and we refer the reader to reference 1 for a detailed introduction.

The kth constraint, fixing the distance between atoms i and j, can be expressed mathematically as:

σk = |ri − rj|² − dk² = 0

where ri and rj are the positions of atoms i and j and dk is the constrained bond length between them. These constraint equations represent forces that are described by their action on the dynamics of the system rather than derived from the potential. They can be incorporated into the equations of motion as constraint potentials:

mi (d²ri/dt²) = Fi + Σk=1..n λk (∂σk/∂ri)

where n is the number of constraints and each λk is a constant (a Lagrange multiplier) that must be computed in order for the corresponding constraint to be satisfied. Numerically integrating these equations of motion with the Verlet algorithm gives

ri(t + h) = r̃i(t + h) + (h²/mi) Σk λk (∂σk/∂ri)

where r̃i(t + h) is the unconstrained position of atom i. The corrected positions ri and rj must still obey the constraint equations σk:

|ri(t + h) − rj(t + h)|² − dk² = 0

The value of each λk can be obtained with standard root-finding techniques. In the SHAKE algorithm, the resulting system of equations, one per constraint, is solved iteratively [1].
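
As an illustration, the sketch below applies a SHAKE-style iterative correction for a single distance constraint between two atoms after an unconstrained integration step; real implementations cycle over all constraints in a molecule until every one is satisfied. The function name, tolerance and example coordinates are our own choices for this example.

```python
import numpy as np

def shake_pair(r_old_i, r_old_j, r_new_i, r_new_j, m_i, m_j, d, tol=1e-10, max_iter=100):
    """Iteratively correct r_new_i, r_new_j so that |r_i - r_j| equals the bond length d.

    r_old_* are the positions at time t; the correction is applied along the old
    bond vector, as in the standard SHAKE scheme."""
    old_bond = r_old_i - r_old_j
    for _ in range(max_iter):
        s = r_new_i - r_new_j
        sigma = np.dot(s, s) - d * d               # constraint violation
        if abs(sigma) < tol:
            break
        # Lagrange-multiplier-like correction factor (first order in the violation)
        g = sigma / (2.0 * (1.0 / m_i + 1.0 / m_j) * np.dot(s, old_bond))
        r_new_i = r_new_i - (g / m_i) * old_bond
        r_new_j = r_new_j + (g / m_j) * old_bond
    return r_new_i, r_new_j

r_old_i, r_old_j = np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])
r_new_i, r_new_j = np.array([0.02, 0.01, 0.0]), np.array([1.05, -0.01, 0.0])
ri, rj = shake_pair(r_old_i, r_old_j, r_new_i, r_new_j, m_i=1.0, m_j=1.0, d=1.0)
print(np.linalg.norm(ri - rj))   # ~1.0 after the correction
```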

Thermostats and barostats

MD with the classical equations of motion simulates a system at constant volume and energy. Every timestep, a new state of the system, called a microstate, is generated. Because every microstate generated has the same volume and energy, MD with the classical equations of motion can be thought of as sampling from a set of microstates with a given number of atoms, volume and energy, called a microcanonical/NVE ensemble (so called because the Number of particles, Volume and Energy are constant).

It is often desirable to sample instead from a canonical/NVT ensemble (constant Number of particles, Volume and Temperature) or an isothermal-isobaric/NPT ensemble (constant Number of particles, Pressure and Temperature) to maintain compatibility with experimental conditions. Various methods, called thermostats and barostats respectively, have been developed to constrain temperature and pressure; most work by modifying the equations of motion. Common thermostat algorithms include the Berendsen and Nosé-Hoover thermostats, and a common barostat is the Parrinello-Rahman method.

The equations of motion for most thermostat algorithms take the general form:

mi (d²ri/dt²) = Fi − γ mi vi     (*)

where γ can be thought of as a friction-like coefficient that adjusts the system kinetic energy toward the desired temperature. The effect of a thermostat on the system dynamics can also be described as a scaling of the internal velocities (velocities relative to the center of mass) at every timestep by a factor λ. Mathematically, this can be expressed as:

vi(t) = λ(t, h) ṽi(t)

where ṽi is the unscaled internal velocity and h is the timestep.

Because the temperature cannot change instantaneously, λ(t, 0) = 1.

Letting λ(t, h) = 1 − γ(t) h, the velocity-scaling description agrees, to first order in h, with the modified equations of motion (*).

The Berendsen thermostat is based on Newton’s law of cooling:

dT/dt = (T0 − T)/τ

where T0 is the heat bath temperature and τ is an empirical coupling parameter that quantifies the strength of the coupling between the system and the heat bath. The corresponding finite difference quotient is:

(T(t + h) − T(t))/h = (T0 − T(t))/τ

Because the temperature is proportional to the kinetic energy, scaling the velocities by λ scales the temperature by λ², and the velocity scaling factor at every timestep,

λ² = 1 + (h/τ)(T0/T(t) − 1),

is determined by the ratio between the system temperature and the heat bath temperature and by the coupling parameter.

Note that the coupling parameter affects the system dynamics like a damping parameter. The Berendsen thermostat, therefore, does not generate a true canonical velocity distribution because the coupling parameter damps out system temperature fluctuations.
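
A minimal sketch of the Berendsen velocity rescaling step is shown below, assuming unit masses and kB = 1 so that the instantaneous temperature is proportional to the mean squared velocity; the scaling factor follows directly from the finite-difference form of Newton's law of cooling given above.

```python
import numpy as np

def berendsen_scale(velocities, target_T, h, tau):
    """Rescale velocities toward target_T with coupling time tau (unit masses, kB = 1)."""
    n_atoms, dim = velocities.shape
    kinetic = 0.5 * np.sum(velocities ** 2)
    current_T = 2.0 * kinetic / (dim * n_atoms)          # instantaneous temperature
    lam = np.sqrt(1.0 + (h / tau) * (target_T / current_T - 1.0))
    return lam * velocities

velocities = np.random.randn(100, 3)
velocities = berendsen_scale(velocities, target_T=1.0, h=0.005, tau=0.1)
print(np.mean(velocities ** 2))
```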

Both the Nosé-Hoover thermostat and the Parrinello-Rahman method are based on the extended system approach, in which additional virtual degrees of freedom are introduced into the system and used to regulate system properties.

A detailed introduction to the Nosé-Hoover thermostat and the Parrinello-Rahman method is beyond the scope of this article; we refer the reader to reference 4 for the Nosé-Hoover thermostat and to reference 2 for the Parrinello-Rahman method.

Conclusion

According to American physicist Richard Feynman, “all things are made of atoms and everything that living things do can be understood in terms of the jiggling and wiggling of atoms”. MD is a computer simulation technique that simulates the “jiggling and wiggling of atoms” and has been hailed as a way of predicting the future by animating nature’s forces. In this article, we introduced the basic method behind MD to provide the reader with the necessary background knowledge to understand an MD simulation.

References

[1]. Rapaport, D. C. The Art of Molecular Dynamics Simulation. Cambridge: Cambridge University Press, 2004.
[2]. Griebel, Michael, Knapek, Stephan and Zumbusch, Gerhard. Numerical Simulation in Molecular Dynamics: Numerics, Algorithms, Parallelization, Applications. Berlin: Springer, 2007.
[3]. Leach, A. R. Molecular Modeling: Principles and Applications. Upper Saddle River: Prentice Hall, 2001.
[4]. Frenkel, Daan and Smit, Berend. Understanding Molecular Simulation: From Algorithms to Applications. San Diego: Academic Press, 2002.
[5]. Hünenberger, Philippe. Thermostat Algorithms for Molecular Dynamics Simulations. Advanced Computer Simulation. Berlin: Springer, 2005, pp. 105-149.

Investigating Wavelength Dependence of Surface-Enhanced Raman Scattering

by: Timothy Kinney and Professor Bruce Johnson

Abstract

One of the most significant developments in the field of Raman spectroscopy was the discovery of Surface-Enhanced Raman Scattering (SERS). In the early days of SERS, Raman signals were known to be enhanced by factors of up to a million-fold, and more recently by extraordinary factors of up to 10¹²–10¹⁴. To understand what this means, we must first understand how Raman scattering works and then apply this understanding to the surface-enhanced case. SERS is thought to arise from at least two kinds of enhancement: an electromagnetic enhancement, which is well understood, and a chemical enhancement, which is not completely understood. Charge-Transfer has been shown to contribute some chemical enhancement, but mysteries remain. We are working to characterize the wavelength dependence of the anti-Stokes to Stokes ratio, and thereby determine what contributions are made by metal-adsorbate-complex transitions approximately resonant with both the laser and surface plasmon frequencies in the absence of Charge-Transfer effects. Work by this group and by others indicates that anti-Stokes to Stokes ratios can be sensitive indicators of this chemical enhancement mechanism in SERS when it occurs.

Background: Raman Scattering

A new type of light-matter interaction was experimentally verified by C.V. Raman and K.S. Krishnan in 1928, now called Raman scattering.1 In contrast with Rayleigh scattering, it is the scattering of photons with a change in frequency. Though the effect holds for all wavelengths of light, it is easier to consider a monochromatic case. During a Raman event, an incident photon of a particular wavelength and energy interacts with the vibronic modes of a matter system. The term vibronic indicates a combination of electronic states and vibrational states. The energy of the system increases by an amount allowed by vibronic transitions and a photon having lost this exact energy is re-emitted. The molecule thus moves from a lower energy state to a higher energy state, while photons of a higher energy are absorbed and photons of a lower energy are emitted. This is called a Stokes shift, but each vibronic transition also has an Anti-Stokes shift. (See Figure 1.) In the Anti-Stokes case, the matter is already in an excited vibrational state. An incident photon interacts with the excited system, resulting in the emission of a higher energy photon and a loss of energy in the system as it returns to a lower energetic state. This happens less often and is dependent on how many molecules in the system are in an excited state, generally due to thermal interactions. The probability of any Raman event occurring is on the order of one for every million incident photons, but it is highly dependent on the polarizability of the matter, and the frequency of the incident radiation. The ratio of Anti-Stokes events to Stokes events is a useful quantity for observing resonance between vibronic transitions and incident radiation.

Since the advent of laser technology in the mid-20th century, scientists have had access to collimated beams of nearly monochromatic radiation. From this, Raman spectroscopy was developed to probe the vibronic transitions of matter by examining the emitted Raman light with a spectrometer. If the energy of the incident radiation is known and the energy of the emitted radiation is measured, the energy gained by the system can be calculated exactly. This energy corresponds to vibronic modes of the system, offering experimental verification of theoretical calculations. Like infrared or fluorescence spectroscopy, Raman spectroscopy identifies groups of molecules by their chemical bonds, but it also provides additional information. The vibrational modes of matter are easily characterized and studied, and a large body of work is dedicated to this field. In Raman spectroscopy, vibrational modes are called Raman modes. To display Raman spectra, the wavenumber of the incident radiation (usually a laser) is marked as zero, and the wavenumber of each Raman mode is plotted as a shift from zero corresponding to the energy of that mode. For example, a carbon-carbon stretch occurs at a shift of approximately 1600 wavenumbers (cm⁻¹) regardless of the frequency of the incident radiation: changing the incident radiation changes the absolute wavenumber position of the scattered light but not the relative shift. Note that varying the surrounding environment or the physical state of the matter may shift the Raman modes slightly. Figure 4 shows two Surface-Enhanced Raman spectra of 1-dodecanethiol; several carbon-carbon modes are visible between 1050 and 1200 wavenumber shift.
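
As a small worked example of this shift convention (with numbers chosen for illustration rather than taken from our measurements), the snippet below converts a fixed 1600 cm⁻¹ Raman shift into the absolute wavelength of the Stokes line for two different laser wavelengths, showing that the absolute position moves while the shift stays the same.

```python
def stokes_wavelength_nm(laser_nm, shift_cm1):
    """Absolute wavelength of the Stokes line for a given laser and Raman shift."""
    laser_cm1 = 1e7 / laser_nm                 # laser wavenumber in cm^-1
    scattered_cm1 = laser_cm1 - shift_cm1      # the Stokes photon loses the mode energy
    return 1e7 / scattered_cm1

# a ~1600 cm^-1 carbon-carbon stretch excited at 532 nm versus 633 nm
for laser in (532.0, 633.0):
    print(laser, "nm laser ->", round(stokes_wavelength_nm(laser, 1600.0), 1), "nm Stokes line")
```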

Background: Surface-Enhanced Raman Scattering

Due to the rarity of Raman events, the Raman signal is often very weak relative to the intensity of the incident radiation. The experimentalist therefore wants very intense incident radiation to ensure a strong signal; however, over-heating the sample can lead to unwanted side effects, so a balance must be struck between the strength and the authenticity of the signal. One way to mitigate this is to place the sample in close proximity to a noble metal such as gold, silver, or copper. Because the intensity of Raman scattering depends on the polarizability of the matter, the strength of the signal depends closely on the number of free electrons in the system. Noble metals have an abundance of free electrons that become polarized in resonance with the incident radiation and the molecules being studied, resulting in an increased intensity of Raman photons. This is called Surface-Enhanced Raman Scattering, or SERS. When light interacts with a noble metal, the free electrons on the surface tend to polarize in resonance with the light; this collective motion is called a surface plasmon. The peak resonance of the surface plasmon depends on the type of noble metal and its geometry. Understanding surface plasmons and their wavelength dependence is therefore critical to understanding surface enhancement.

In the late seventies, electrochemists took spectra of pyridine adsorbed on silver, copper or gold electrodes in electrolyte solutions.3-5 By scanning the potential of the electrode (measured against a saturated calomel electrode), they demonstrated variations in the Raman signal. It was discovered that oxidation/reduction cycling would dissolve and redeposit monolayers of metal, augmenting the SERS enhancement by two orders of magnitude; this additional enhancement was attributed to molecular-scale roughening of the electrode surface.6 The SERS intensity was also found to increase with the number of deposited monolayers, for small numbers of layers.7 Allen et al. were able to show that multiple mechanisms contribute specific magnitudes to the overall enhancement, suggesting that approximately 3.5 × 10¹ of the enhancement could come from a surface-roughness mechanism and 2 × 10³ from roughness-independent mechanisms for pyridine adsorbed on copper electrodes.8

Wavelength Dependence of SERS: Multiple Mechanisms

Pockrand et al. determined that the multiple mechanisms were not limited to substrate activity. They collected Raman spectra of pyridine on evaporated silver films and on silver electrodes, and compared SERS-active with SERS-inactive films while investigating the wavelength dependence of the Raman intensities.9 They explicitly noted that excitation profiles could be plotted showing SERS intensities as a function of the laser excitation wavelength, with a single maximum in the visible region for each vibrational mode. (See Figure 2.) Their paper demonstrated that all the excitation profiles they measured show similar resonance behavior, but that the curves shift to higher incident-light energy with increasing energy of the vibration. The surface plasmon enhancement of silver was qualitatively the same for evaporated silver films and for silver electrodes, ruling out mechanisms solely dependent on the substrate. Regarding the adsorbed molecule, they found that doublet signals were obtained from thick layers of pyridine on SERS-active films; one of the pair corresponded to bulk pyridine and the other to a red-shifted version of the same vibrational mode in the surface pyridine. This corroborates the conclusion that the SERS response for pyridine depends both on the surface plasmon activity of the silver substrate and on the excitation energy of the laser. Furthermore, the large enhancement factors from SERS-active films could not be described by local field effects alone, and Pockrand concluded that a chemical contribution must play a role.

The multiple-mechanism picture has prevailed over the years, leading to the proposal of various models that are more or less dependent on specific data obtained from SERS experiments. In general, it is now well accepted that a local-field electromagnetic enhancement occurs, which resonates with vibronic modes in the molecule and thus substantially increases the intensity of the Raman scattered radiation. This mechanism requires that the molecule’s Raman-shifted vibronic frequencies fall within the spectral width of the substrate surface plasmons, although direct proximity between metal and molecule is not required.10,11 It is also well accepted that this mechanism does not completely describe SERS: for physisorbed or chemisorbed molecules, molecule-specific chemical effects appear in the enhancement patterns and intensities.10 A Charge-Transfer mechanism, which does require direct proximity between metal and molecule, has been suggested to explain these.11-13 Charge-Transfer involves the overlap of molecular orbitals, providing pathways for the migration of electrons from the metal to the adsorbate or vice versa. This is thought to provide enhancement by modifying the Raman polarizability tensor, thereby increasing the intensity of the Raman scattered radiation, though some scientists argue that the Charge-Transfer mechanism should be regarded as creating an altered adsorbate complex rather than as an enhancement mechanism.11

Early work with electrodes, colloids and island films was insufficient to fully characterize the complexity of SERS substrate plasmon resonance activity. Researchers called for experiments with better control of surface roughness, but the technology was lacking.8 A solution developed at Rice University involves the fabrication of nanoscale particles such as nanoshells. The study of nanoparticles has led to major advances in the understanding of SERS; for example, Jackson et al. recently investigated the relationship between nanoshell geometry and SERS enhancement.14 Using nanoshells, a new monodispersed substrate, they showed that the Raman effect is enhanced in proportion to the density of the nanoshells, in particular for those modes with which the laser is resonant.14 In other work by the same authors, the linear dependence of SERS intensity on the density of the nanoshell substrate demonstrated that the SERS response in their experiments was due to single-nanoshell resonances rather than dimer resonances.15 It is important to note that tuning the laser wavelength to excite single-nanoshell resonances or dimer resonances yields data corroborating one picture or the other, reinforcing the notion that wavelength dependence is critical to understanding SERS mechanisms.

Maher et al. specifically investigated the wavelength dependence of SERS on multiple substrates by plotting the anti-Stokes to Stokes ratio for p-mercaptoaniline (pMA) at different excitation energies, and discovered that asymmetries arise.16 At a fixed temperature, a calculable amount of anti-Stokes emission is expected from the thermal population of the vibrational levels. They used the ratio of anti-Stokes to Stokes intensities as a sensitive indicator of resonant conditions involving both the metal and the adsorbate molecule, and argued that the asymmetry is systematically structured according to underlying resonances that require more than a single excitation wavelength to be seen. One series of data taken at the 633 nm laser line produced a higher anti-Stokes to Stokes ratio than expected, and another series using 514 nm excitation produced a lower ratio than expected. (See Figure 3.) This may correspond to an anti-Stokes emission intensity peak in the 585 nm to 610 nm region. Their data lacked the density of excitation energies necessary to determine this with certainty, but they did show that the anti-Stokes to Stokes ratio is a potentially valuable means of mapping the resonant interaction between the probe molecules and the metal.
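
For reference, one common way to estimate the thermally expected anti-Stokes to Stokes ratio is the Boltzmann population factor multiplied by the usual fourth-power frequency prefactor; deviations from this estimate are what signal the additional resonances discussed above. The sketch below uses that approximation with an illustrative mode and laser line, not our experimental values.

```python
import math

H = 6.626e-34      # Planck constant, J s
C = 2.998e10       # speed of light, cm/s
KB = 1.381e-23     # Boltzmann constant, J/K

def thermal_as_to_s_ratio(nu_v, nu_L, T=298.0):
    """Thermally expected anti-Stokes/Stokes intensity ratio for a mode of shift nu_v (cm^-1)
    excited at laser wavenumber nu_L (cm^-1) and temperature T (K)."""
    prefactor = ((nu_L + nu_v) / (nu_L - nu_v)) ** 4   # fourth-power scattering frequency factor
    boltzmann = math.exp(-H * C * nu_v / (KB * T))     # excited-state population factor
    return prefactor * boltzmann

# e.g. an illustrative ~1077 cm^-1 mode with 633 nm excitation (1e7/633 ~ 15798 cm^-1)
print(thermal_as_to_s_ratio(1077.0, 1e7 / 633.0))
```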

Advancing Our Understanding

We anticipate that in some cases unexpected resonances (called “hidden” by Maher et al.16) of the metal-molecule complex will be observed even in the absence of Charge-Transfer processes. We are specifically interested in 1-dodecanethiol, though we also intend to perform experiments with p-mercaptoaniline because of the other work that has been conducted at Rice University.10,16 Initial observations of the 1-dodecanethiol and gold colloid system have been made using a Renishaw inVia Raman microscope. (See Figure 4.) We are currently building a Tunable Raman Darkfield Microscope system to restrict measurements to regions where the gold colloids have deposited in a smooth and dense manner. Using a dye laser, we can tune continuously from 558 nm to 606 nm, providing good coverage of the region of the visible spectrum where gold colloids enhance strongly. Compiling excitation profiles as a function of excitation energy, we will calculate the theoretical thermally-allowed anti-Stokes to Stokes ratio for the sample and then examine deviations from it. Building on the work of Gibson et al. (see the subsequent section of this paper),10 we may prove or disprove the theory of hidden resonances of the metal-molecule complex in the absence of Charge-Transfer for pMA attached to gold nanospheres and nanoshells. This work can later be applied to other substrates and molecules to provide data for the general case.

Theoretical Support

As theoretical support for our work, we turn to Gibson et al., who independently ran density-matrix calculations for pMA and obtained results corroborating the notion that unexpected resonant behaviors of the molecule-metal complex may play a significant role in surface enhancement.10 Their model reproduced anti-Stokes resonant excitation behavior in the absence of Charge-Transfer, supporting our hypothesis that these unexpected resonances occur when the anti-Stokes emission is resonant with vibronic transitions of the molecule-metal complex. Gibson et al. suggest that anti-Stokes emission resonant with a metal-adsorbate transition can produce anti-Stokes intensity peaks that fall away in intensity on either side of the peak. Their calculations agree with the work of Maher et al.16 and may explain why the observed anti-Stokes to Stokes ratio deviates from thermal expectations, being lower or higher depending on the laser excitation energy.

Acknowledgements

The author would like to thank Rice University and the Rice Quantum Institute. He also acknowledges funding provided by the National Science Foundation and hopes that Research Experience for Undergraduates programs will continue to be funded for the long-term development of research sciences.

References

1. Long, D.A. “Raman Spectroscopy”, (McGraw-Hill, Great Britain, 1977).
2. Image from Vrije University Brussels, 2007, Dec 18. Accessed 2008 Mar 10. Axis labels added.
3. M. Fleischmann, P.J. Hendra and A.J. McQuillan, Chem. Phys. Letters 26, (1974) 123.
4. U. Wenning, B. Pettinger, H. Wetzel, Chem. Phys. Lett. 70, (1980) 49.
5. J. Billmann, G. Kovacs and A. Otto, Surface Sci. 92, (1980) 153.
6. D.L. Jeanmaire and R.P. Van Duyne, J. Electroanal. Chem. 84, (1977) 1.
7. B. Pettinger and U. Wenning, Chem. Phys. Letters 56, (1978) 253.
8. Craig S. Allen, George C. Schatz, Richard P. Van Duyne, Chem. Phys. Letters 75, (1980) 201.
9. I. Pockrand, J. Billman, A. Otto, J. Chem. Phys. 78 (11): 6384-6390 1983.
10. J.W. Gibson, B.R. Johnson, J. Chem. Phys. 124, 064701 (2006).
11. M. Moskovits, Rev. Mod. Phys. 57 (3): 783 1985.
12. M. Osawa, N. Matsuda, K. Yoshii, I. Uchida, J. Phys. Chem. 1994, 98 12702-12707.
13. J.R. Lombardi, R.L. Birke, T. Lu, J. Xu, J. Chem. Phys. 84 (8) 4174, 1986
14. J.B. Jackson, S.L. Westcott, L.R. Hirsch, J.L. West, N.J. Halas, Appl. Phys. Lett. 82, 257 (2002).
15. J.B. Jackson, N.J. Halas, Proc. Nat. Acad. Sci. USA 101, 17930 (2004).
16. R.C. Maher, J. Hou, L.F. Cohen, E.C. Le Ru, J.M. Hadfield, J.E. Harvey, P.G. Etchegoin, F.M. Liu, M. Green, R.J.C. Brown, M.J.T. Milton, J. Chem. Phys. 123, 084702 (2005).