Molds: The Effects of Wet Weather on Crops.

Amber Coulter's article "Cool, Wet Fall Dampens Large Harvest" discusses the damage wet weather has done to this season's harvest. Grain quality is greatly affected, and the affected grain in turn becomes harmful to animals. According to the article, however, the molds from these crops are perfectly safe for humans.
   
The biological aspect of the article lies in its explanation of the molds that sprout in produce such as grains during wet and cool weather, as well as their effects on farm animals such as poultry and hogs, since the same produce is used for livestock feed. The article discusses two types of molds that grow on grain and affect its quality. Diplodia is the more prevalent of the two and is by far the less harmful to farm animals, although it can still reduce grain quality. The other is Gibberella, which is quite harmful to livestock but not to humans. These two molds sprout in the cooler and wetter regions around Dubois County. According to the article, a slow yet effective solution to the problem is drying the grain with grain dryers and fans to prevent the mold from spreading further.
   
In my opinion, it is difficult to prevent molds from growing in grain during cool and wet weather. Such conditions seem to help them thrive and spread faster. Although the drying solution may be somewhat effective, it cannot fully prevent the spread of mold and the resulting loss of grain quality. It would be a waste to throw away poor-quality grain, but it would also be unreasonable to feed it to livestock or sell it for human consumption. Adding safe chemicals might help reduce the occurrence of molds, but only if such products become available. Seeing that livestock avoid feeds made from grain of such quality, it would be better not to sell moldy grain to buyers.

Ophrys apifera.

Bee orchid is the common name of this short-lived perennial plant that occurs mostly in pastures, field borders, banks on chalk or limestone, recently disturbed soils, and calcareous and base-rich areas of Central and Southern Europe, particularly Britain, and of North Africa. Taxonomically, the bee orchid belongs to the family Orchidaceae and the genus Ophrys, with the specific epithet apifera, hence the scientific name Ophrys apifera. The common name of the plant is derived from its distinct morphological feature of having a flower that resembles a female insect; this flower can also emit female insect-like pheromones in order to attract and deceive its pollinators.

Figure 1 shows the flower of the bee orchid and its general morphological features. In keeping with the common features of members of its family, the bee orchid has roots, stems, leaves and flowers, all of which help the plant cope with the changes and demands of its environment. The role of the flower of the bee orchid is already reflected in cross-pollination, but to further elucidate the roles of the other anatomical structures, it is necessary to discuss the physical characteristics and propagation of this plant species.

Fig. 1. The morphological features of Ophrys apifera, with emphasis on the insect-like flower
The bee orchid thrives in areas of light shade, with a soil pH of 6.1 to 7.8 (mildly acidic to mildly alkaline), dry to moist soils, and sandy, loamy to clayey soil structures. With respect to cultivation, it can be propagated asexually via transplanting and through the use of tubers, rhizomes, corms or bulbs. Sexual reproduction is possible through seeds and through self-fertilization, the ability of a plant to fertilize itself provided that both male and female reproductive organs are present. The roles of the anatomical structures, specifically the roots, stems and leaves, are well emphasized in the asexual mode of reproduction of the bee orchid because they are the specific plant parts directly used for propagation. In general, the roles of the anatomical features are evident in reproduction and in the sustenance and support that allow the plant to thrive in areas with diverse physical properties.

Deforestation

The term deforestation connotes the transformation of forested land to permanently cleared land or to a shifting-cultivation cycle [1]. It involves permanent destruction of forest land through human activities such as logging and the burning of trees in forested regions.

Global warming
Deforestation and destruction of rainforests have several adverse consequences, the most significant being global warming, which occurs due to increased atmospheric concentrations of greenhouse gases that, in turn, raise the global mean temperature. Carbon dioxide (CO2) is the main greenhouse gas. Trees absorb CO2, reducing its concentration in the environment. Conversely, forest clearance and wood burning add to the concentration of CO2 in the atmosphere. Destruction of forests implies fewer trees to absorb the greenhouse gas, promoting global warming. Research indicates that deforestation, biomass burning and other land use practices account for over 18 percent of enhanced radiative forcing causing global warming, far outweighing the effects of carbon emissions from planes, automobiles and factories. Given the extent of deforestation worldwide, its impact on global warming knows no bounds. For example, 500 million tonnes of CO2 were released into the atmosphere as a consequence of the 1987 burning of the Amazon rainforest. In another instance, the forest fires of Indonesia in 1997 consumed over a million hectares of forest land and created a cloud of smog over all of Southeast Asia, from Thailand to the Philippines, for over a month, in addition to their effect on the global climate.
  
In addition to the broader impact of rainforest deforestation on climate change, its most immediate effect is soil erosion. Tree roots make gaps in the soil that enable rainwater to move through the soil before being absorbed. The removal of trees and the use of heavy machinery for logging compact the soil and fill the spaces that carry air and water to the roots of plants, making their growth and that of new plants difficult.
   
Only 10 percent of Amazonian soils are high-quality and rich in nutrients, and deforestation and removal of vegetation cause these soils to be washed away easily by rainwater. Increased rainwater run-off further causes compaction, sheet erosion, surface lowering and gullying on bare and agricultural lands, decreasing yields of annual and perennial crops such as corn, maize, coffee and black pepper, and of silvicultural plantations, among others. Costa Rica loses about 860 million tons of topsoil annually [8]. Although the loss of topsoil can be made good by importing huge quantities of fertilisers, purchasing them poses a huge monetary burden on the economy. In Ivory Coast, forested slopes lose soil at a rate of 0.03 tons per hectare per year, while cultivated slopes lose 90 tons per hectare annually and bare slopes lose 138 tons per hectare. Soil degradation and erosion have posed serious concerns for environmentalists, with studies indicating a 10 percent loss in the soil's natural fertility in the latter half of the 20th century alone. The worst of these losses occurred in Europe, where 17 percent of soil has been damaged by human activity such as mechanized farming and fallout from acid rain. In Mexico and Central America, 24 percent of soil is highly degraded, mostly as a consequence of deforestation.
  
Biodiversity refers to the variability among living organisms, inter alia, in terrestrial, marine and other aquatic systems and the ecological complexes of which they are part; this includes diversity within species, between species and of ecosystems. The biodiversity on our planet is the result of 3.5 billion years of evolution and manifests itself in millions of distinct biological species. Rainforests provide a unique environment for the existence of biodiversity. Deforestation, however, destroys this unique environment, harming plant, animal and insect species, some of which are still not catalogued. Consequently, many plant and animal species are on the verge of extinction.
   
Loss of biodiversity has severe effects on mankind and the planet, such as a reduced gene pool, which in turn could be fatal to the future of humanity. Associated with the loss of biodiversity is the loss of medical benefits derived from flora and fauna. For instance, the rosy periwinkle plant has helped in the production of two anti-cancer drugs [13]. The majority of the world's medicines are made from species discovered in tropical forests, and their destruction implies the loss of possible future medicines for diseases like AIDS and cancer.
   
Rainforests are also the source of food, shelter, nourishment, culture, recreation and relaxation, and livelihood for various indigenous and non-indigenous people and their destruction will deprive future generations of these resources. 

   
When rainforests are cut down or destroyed, numerous climatic consequences ensue. These climatic changes are listed below.

Removal of previously moist forest soil: In the absence of trees and the canopy of tree leaves, exposure to the sun's rays results in evaporation of moisture in the soil, converting moist soil into dry, cracked soil. Dry soil, in turn, reduces cloudiness over continents, causing a large amount of solar radiation to reach the ground and be absorbed by it, increasing atmospheric temperature.
Dramatic increase in temperature extremes: Trees provide shade and moderate temperature. Their absence results in significant increases in day temperatures of over 40 degrees Fahrenheit and major falls in night temperatures.
Moist humid regions change to desert: The transformation of previously moist soil into dry soil, and eventually into desert sand, is another climatic repercussion of deforestation. Moreover, as soil dries out, the frequency of dust storms increases and the soil loses agricultural and commercial value.
No recycling of water: Deforestation diminishes the recycling of water inland, reducing rainfall in the interior of continents. When rain falls on densely forested regions, about one-fourth of it runs off into the sea while three-fourths evaporates directly or through transpiration into the atmosphere. With deforestation, the ratio is reversed, inhibiting the natural recycling of rainwater.

Less carbon dioxide and nitrogen exchange: Rainforests play a vital role in the carbon dioxide exchange process and rank second only to oceans as atmospheric sinks for the greenhouse gas. Deforestation is responsible for over 10 percent of current greenhouse gas emissions, which trap heat in the atmosphere, causing increases in global temperature.
More desertification: Desertification refers to the formation of deserts in arid and semi-arid regions as a consequence of overgrazing, deforestation and climatic change. Deforestation causes run-off of rainfall and intensified soil erosion, which result in desertification. The seriousness of the problem depends on soil characteristics and topography.

Other effects: Forests are sources of clean water and air, which are a must for human survival. Additionally, rainforests provide many aesthetic, recreational and cultural rewards. These benefits diminish as each rainforest is cut or destroyed, causing major social and economic repercussions for the entire world.
Introduction to the Brazilian Problem and Personal Analysis
  
In many tropical countries, deforestation occurs as a result of subsistence agriculture. In Brazil, however, only one third of deforestation can be attributed to subsistence cultivators, while the majority results from cattle ranches and the clearing of land for pasture for commercial and speculative interests, misguided government policies giving subsidies to cattle ranching projects, inappropriate World Bank projects, and commercial exploitation of forest resources. To prevent further destruction of forests, swift action must be taken to address these issues. Focusing solely on the promotion of sustainable use by local people would neglect the most important forces behind deforestation in Brazil. A major hindrance to these conservation efforts is the link between deforestation and the economic health of the country. In the past, heavy deforestation in 1993-98 paralleled economic growth, while conservation efforts coexisted with economic slowdown. During economically turbulent times, the government frequently grants subsidies and tax relaxations to ranchers and developers to expand their pasturelands and operations and undertakes colonization programs, making it profitable to convert natural forests to agricultural and pasturelands and thus encouraging deforestation.

Large landowners clear vast sections of the Amazon and sometimes plant them with savannah grasses for cattle pasture. Land may also be cleared for investment purposes during periods of high inflation, when pasture prices exceed forest land prices, a condition again promoted by faulty government policies.
   
Deforestation in Brazil
Studies indicate that the majority of deforestation in Brazil, around 60-70 percent, is accounted for by cattle ranchers, with small farmers owning up to 100 ha of land accounting for 30 percent of this value, while the other 70 percent can be attributed to large ranches of over 1000 ha [23]. A small amount of deforestation also results from the subsistence agriculture of shifting farmers.
   
Large-scale soybean farming, which usually takes place in cerrado grasslands and cleared areas outside the main forest, contributes little to total deforestation. Logging accounts for less than 4 percent of direct deforestation and is more closely linked with forest degradation [24]. However, logging is closely correlated with future clearing for settlement and farming.

Arguments to Conserve the Forests of the World
Importance of Forests
Forests carry a different meaning and value for different people. For some, they are a means of recreation, while for others they are their very source of livelihood. Forests are the source of important natural resources such as timber, minerals, fresh water and medicinal plants. Over 1.6 billion people rely on the ecosystem services that forests offer, including food, clothing, medicine, shelter and subsistence agriculture [25]. Trees absorb carbon dioxide, purifying atmospheric air and regulating the world's climate.
   
Forests are essential for life on this planet and are home to most of the world's biodiversity and endangered species. Despite their value for every form of life, over 36 million acres of forest land are lost every year, causing adverse repercussions for the environment, wildlife and the people who depend on forests for their survival.
   
Given the value of forests to human and animal life and the extent of deforestation, it is the need of the hour to manage, protect and restore the world's forests.
Manage: To sustain life on the planet and ensure continued access to forest resources, it is essential to manage forests. The World Wildlife Fund (WWF) has undertaken programs to manage nearly 540 million acres of forest land in socially, environmentally and economically responsible ways [28]. These initiatives aim at forest management by increasing credible certification of forests, curtailing illegal logging, encouraging companies to source their forest products from well-managed forests, and helping communities acquire greater control over their forests.
                          
Protect: Around 11 percent, or over 1 billion acres, of the world's forested areas had been designated as protected by 2005. These forests include forested areas of the Amazon, the Congo, China, Indonesia and Russia. WWF aims to have another 185 million acres of the world's forests protected by 2010 [30]. These include a wide range of forests, from mangroves to dry forests, from Peru to Madagascar. The WWF is also creating forest networks and undertaking forest landscape restoration projects to ensure proper maintenance of, and connection between, fragmented protected areas in order to enhance the resilience of forests, enable animal and plant life to interchange, and create a healthier, thriving ecosystem.
Restore: Despite the loss of nearly half of the world's forests, many forested areas are still being damaged and destroyed on a regular basis. WWF has undertaken many programs to restore degraded forests to a more authentic state. Their restoration not only involves planting more trees, but also returning forests to a state where they can continue to provide products and services such as improved water quality, soil stabilization, access to food, medicines and raw materials, and stable sources of income for local people [31]. WWF has undertaken forest restoration efforts combining human benefit with biodiversity conservation at the global level in association with governments, the forest industry and local communities. Extensive research is also being conducted to formulate recommendations on issues like monetary investment in tree planting and forest restoration.

The Use of Antisense Procedures in the Genetic Engineering of Plants for Crop Improvement

Crop improvement is the main strategy for a farmer. After the rediscovery of Mendel's laws of heredity at the very beginning of the last century, it gained a scientific base. Since then, several new principles and technologies with proven, promising and potential applications have come into practice. Initially, the main strategy was plant breeding: crossing two desirable parents and making proper selections in the subsequent generations. During the last thirty years, deeper insights from molecular biology have opened new opportunities. New technologies for moving foreign DNA into a host crop plant have been developed. This is achieved by introducing a DNA sequence extracted, developed or designed for a desired trait into a cell of the host crop plant and regenerating a whole plant from it. Such a technology, though welcome, had the limitation that it could also introduce many undesired characters into the host. Advances in a closer and broader understanding of nucleic acids have made possible a method that facilitates desirable alterations within the crop plant genome itself.

Antisense Technology - Conceptually, this means shutting off (silencing) a gene so that it cannot produce the corresponding protein responsible for effects such as the function, shape and/or color of an organ. In other words, a molecule that interacts with the complementary strand of a nucleic acid is called an antisense molecule. For example, in 1994 the polygalacturonase gene (responsible for fruit ripening) in tomato was suppressed to delay the ripening-dependent softening of the fruit, thus prolonging its shelf life.
The Antisense Concept - The antisense RNA concept is the reverse of the normal process in which messenger RNA (mRNA) is transcribed from DNA. From this mRNA, in the normal situation, an amino acid sequence or peptide is translated, finally forming a protein. In antisense technology, by contrast, the transcribed product, mRNA, is not allowed to proceed to the translation step. The mRNA is made to pair with its complementary strand and thus form a double-stranded nucleic acid, which the cellular protein synthesis machinery can no longer recognize. In many cases the double-stranded nucleic acid is not stable and disintegrates soon after formation.
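To illustrate the complementarity described above, the short Python sketch below (purely illustrative, not from the source) derives the antisense sequence of an mRNA fragment as its reverse complement under RNA base pairing (A-U, G-C). The sequence shown is hypothetical, not a real polygalacturonase transcript.

# Minimal sketch: the antisense strand of an mRNA is its reverse complement
# under RNA base pairing. The duplex formed when the two anneal is what
# blocks translation, as described above.
RNA_COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def antisense_rna(mrna: str) -> str:
    """Return the antisense (reverse-complement) sequence of an mRNA string."""
    return "".join(RNA_COMPLEMENT[base] for base in reversed(mrna.upper()))

# Hypothetical sense fragment (for illustration only):
sense = "AUGGCUUACGGA"
print(antisense_rna(sense))   # prints UCCGUAAGCCAU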

Creating Resistance to Viral Diseases - Resistance to Barley Yellow Vein Mosaic Virus (BYMV, a potyvirus that threatens many leguminous and solanaceous crop species and crops of other families) was created by constructing an antisense RNA from the C-terminal part of its coat protein gene. This was attached to the 3' side of the total non-coding region of the BYMV, and the construct was then attached to the Cauliflower Mosaic Virus 35S promoter sequence (CaMV 35S). The CaMV 35S promoter activates the functioning of the genetic sequence under its control. The whole construct was then introduced into Nicotiana benthamiana. The engineered tobacco plants were allowed to proceed through normal sexual cycles for seed production. When challenged with the BYM virus, the resulting progeny showed a mild resistance reaction not seen in the non-engineered progeny.
The Success - After several experimentally validated proofs of the feasibility and success of this concept, it has been employed in many other areas of crop improvement. These include modification of flower color in ornamentals and viral resistance in many other crops; in papaya, for instance, resistance has been achieved using the ring spot virus coat protein gene.
  
A closely related technology, RNA interference (RNAi), has since been developed, and potential uses have been found for increasing tolerance to biotic and abiotic stresses in crop plants. In humans, this idea is being applied in the context of disease.

Genetic transformation.

In recent years, the use of novel methods for the transfer of genes across a wide range of organisms, leading to the production of transgenic plants, has been recognized as one of the thrust areas of biotechnology. Gene transfer methods in plants make use of a variety of target cell types, which include cultured cells, meristem cells (a type of tissue) from immature embryos, shoots or flowers, and pollen or zygotes, through chloroplast or nuclear transformation (P K Gupta). The uptake of foreign DNA or transgenes by plant cells is called transformation. A variety of techniques have been used to introduce transgenes into plant cells. Stable transformations may be either non-integrative or integrative. In non-integrative stable transformations, the transgene is maintained without integration (chloroplast transformation), but such transformations are not passed to the next generation. On the other hand, integrative stable transformations result when the transgene becomes integrated into the plant genome (nuclear transformation); these integrations are heritable (B D Singh).
Chloroplast transformations - Transformation of the chloroplast genome was successfully achieved in Nicotiana tabacum. Plants carrying transgenic chloroplasts are often described as transplastomic plants. Plastid transformation was first reported in Chlamydomonas and then in tobacco. These plants are preferred over transgenic plants (integrative stable transformations) for the following reasons:
(i) Several preferred genes reside in the chloroplast genome and are therefore suited to efficient expression in the chloroplast only.
(ii) The protein synthesis machinery of the nucleus and cytoplasm is not suited to the transcription and translation of the desirable microbial genes that are often intended to be transferred into transgenic crops. In contrast, the protein synthesis machinery of the chloroplast resembles that of prokaryotes, so genes of prokaryotic origin are appropriately expressed when transferred to a chloroplast.
(iii) Transgenes transferred to the chloroplast show higher expression, which leads to the accumulation of foreign proteins in the chloroplast.
(iv) Multiple genes associated with a complete biosynthetic pathway can be transferred to the chloroplast genome in a single event, otherwise called transgene stacking, which is not possible in nuclear transformation. This gives a greater opportunity to express an entire pathway in a single event.
(v) Expression can change when a locus shifts from one position to another, or when another segment shifts into the vicinity of the locus. This is called the position effect, and it is absent in plastids.
(vi) Transplastomic plants are eco-friendly compared to transgenic plants. They also eliminate the toxic effect of the transgene on useful insect fauna, such as butterflies, which may ingest transgenic pollen.
(vii) Gene silencing is absent.
(viii) Since the transgene products are localized within the chloroplast, pleiotropic effects are absent.
Owing to the above benefits, chloroplast transformation has been exploited in many crops, such as Chinese cabbage, rice, potato, tomato, rapeseed, carrot, cotton, soybean and lettuce.
Production of recombinant protein - This involves the following steps:
Identification of the desired gene of a particular protein
Isolation of the desired gene or DNA fragment.
Insertion of the gene/DNA segment into a suitable vector. A vector is a DNA molecule capable of autonomous replication that is used as a carrier of the DNA segment to be cloned.
Introduction of the recombinant DNA into a suitable host (The introduction of recombinant DNA into a host is called transformation).
Selection of the transformed host cells.
Single-molecule force spectroscopy (SMFS) measurements are invaluable techniques which have gained increasing acceptance in recent times (Bustamante et al., 2000; Clausen-Schaumann et al., 2000; Lavery et al., 2002). Commonly used to quantify the forces within covalent bonds, nucleic acids and receptor-ligand pairs, SMFS seeks to determine the magnitude, in piconewtons, of the forces which hold single molecules together (Albrecht et al., 2003, p. 367; Moy, Florin & Gaub, 1994). The information obtained from the measurement of these forces can then be used to determine the functional characteristics of the molecules under study as well as to map their inherent binding features. In brief, conventional SMFS entails the disruption of intramolecular or intermolecular bonds in single molecules using a rupture force, followed by measurement of this force using a cantilever spring. Measurement using the cantilever spring is usually applied when the atomic force microscopy (AFM) technique is used. Measurement of the force can also be done using beads held in magnetic or optical traps (Oesterhelt et al., 2000, p. 143-6; Hugel et al., 2002).
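As a simple illustration of how a cantilever spring reports force (an assumption-based sketch, not code from any of the cited studies), the force follows Hooke's law, F = k*x, where k is the spring constant determined by calibration and x is the measured deflection. The spring constant used below is only a typical order of magnitude for a soft silicon nitride lever, not a value from the papers discussed here.

# Minimal sketch: convert a cantilever deflection (nm) to force (pN)
# via Hooke's law, F = k * x.
def cantilever_force_pN(spring_constant_N_per_m: float, deflection_nm: float) -> float:
    """Return the force in piconewtons for a given spring constant (N/m) and deflection (nm)."""
    deflection_m = deflection_nm * 1e-9
    force_N = spring_constant_N_per_m * deflection_m   # Hooke's law
    return force_N * 1e12                              # newtons -> piconewtons

# Example: an assumed k = 0.06 N/m lever deflected by 2 nm gives roughly 120 pN.
print(cantilever_force_pN(0.06, 2.0))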
In a bid to optimize the utility of this method, two different improved SMFS techniques have been described. The first of these is the high-throughput single-molecule force spectroscopy (HT-SMFS) technique for membrane proteins described by Bosshart et al (2008). The second, a programmable force sensor technique, was described by Albrecht et al (2003).
This essay evaluates the two SMFS techniques in detail. Specifically, the principles of the two methods, their utility and their uses are discussed. In particular, the two methods are compared and contrasted and their relative strengths and weaknesses delineated. The conclusion is that HT-SMFS is a more useful technique than the programmable force sensor method.
The conventional SMFS Method
The conventional SMFS technique is usually carried out in several stages. The first stage involves the localization of the membranes containing the integral protein being studied. This is done through atomic force microscopy (AFM) imaging. Thereafter, the AFM tip is pressed into the membrane protein and then retracted after some time. If the protein attaches strongly to the cantilever, it is pulled apart as the tip is withdrawn; this is termed unfolding. Insertion and removal of the tip enables the computation of force-distance (F-D) curves. The next stage entails the processing of data, which is done in four steps. First, the F-D curves which represent the unfolding statistics are obtained by coarse filtering (Bosshart et al., 2008, p. 208).
According to Bosshart et al (2008, p. 208), the operator tasked with coarse filtering may ignore some groups of force spectra due to subjectivity. In order to prevent such subjective biases from interfering with the outcomes, automated collection and saving techniques are usually used. Secondly, the curves are classified, and thirdly they are aligned. The final step entails the assessment of the curves with reference to known polymer chain models. Bosshart et al (2008) describe a high-throughput SMFS (HT-SMFS) technique which is an improved version of the conventional method. This HT-SMFS technique is described in the next section.
The High-Throughput Single-Molecule Force Spectroscopy (HT-SMFS) Method
First described by Bosshart et al (2008), this technique is used to study the mechanical features, unfolding pathways, energy landscapes, and intramolecular and intermolecular forces of membrane proteins. To validate the utility of the technique, Bosshart et al (2008) utilized the proton pump bacteriorhodopsin (BR) derived from Halobacterium salinarum and the L-arginine/agmatine antiporter AdiC derived from E. coli. The former was used as a control while the latter was used as the test substance. For AdiC, two types of forces were measured, corresponding to detachment from the N terminus and detachment from the C terminus respectively. Since two different recombinant AdiC forms were utilized, it was possible to assign the forces to their corresponding termini. Because the N terminus is nearly 2.6 times longer than the C terminus and the pH favoured attachment of the positive N terminus to the negative silicon nitride tip, the probability of the N terminus attaching to the cantilever is higher than that of the C terminus (Bosshart et al., 2008, p. 214).
Interactions between AdiC and its substrates were assessed by recording data sets in the presence and absence of agmatine, L-arginine, and D-arginine. The net result was that six data sets were generated from nearly 400,000 F-D curves. The bacteriorhodopsin control was the exception: all the other recordings contained 200 spectra while it had nearly 400 curves (Bosshart et al, 2008).
The experiment was done using a commercial AFM instrument from JPK Instruments and silicon nitride cantilevers. For the cantilevers, the thermal noise method was used to ascertain the spring constant. Adsorption of the bacteriorhodopsin and AdiC particles was followed by several rinses to remove unattached particles and by incubation in specially prepared buffer solutions. The buffer solutions were made up of 150 mM KCl, 20 mM Tris-HCl, pH 7.8 (BR), and 150 mM NaCl, 20 mM citric acid, pH 5.0 (AdiC). Contact-mode AFM was used to localize the AdiC-containing proteoliposomes. The topmost layers of collapsed vesicles were removed using the AFM tip as the need arose, and spectroscopy was performed using the NanoWizard II Ultra AFM instrument (Bosshart et al., 2008, p. 210).
A 150-300 nm long point grid was layered on 2D bacteriorhodopsin or on tightly packed AdiC membranes. The density of the grid used was 0.125 nm-1. A total of 10 consecutive measurements were made at every grid point, with the AFM tip brought into contact with the membrane proteins. Contact was accomplished by exerting a force of 1 nN on the cantilever for 0.1-0.6 seconds, directed towards the membrane, 10 times at each grid point. Retraction of the cantilever then followed at a velocity of 0.53 um s-1 for 0.25 seconds. Thereafter, the F-D curves, each comprising 4096 data points, were saved, and coarse filtering and data analysis were carried out. Analysis of the curves was done using the retraction data only. During filtering, positive spectral forces were indicative of pulling while negative ones were indicative of pushing. The main aim of filtering, as implied before, was to identify the unfolding events. A computer with a processing speed of 2.16 GHz and 2 GB of random access memory (RAM) was used to filter the data using the IGOR Pro software (Bosshart et al., 2008, p. 210).
Negative forces were ignored during the analysis of the data. To get rid of curves which had indefinite unfolding patterns and which had escaped the filtering process, Bosshart et al (2008) further carried out manual fine filtering followed by classification. This process was done with reference to the definitive AdiC patterns. Collection of data was not stopped until recordings of at least 200 classifiable AdiC unfolding patterns were obtained (Bosshart et al., 2008, p. 211).
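The coarse-filtering logic described above can be pictured with a short, purely illustrative Python sketch (not the authors' IGOR Pro routines); the force threshold and minimum peak count below are assumed values chosen for the example, not parameters reported by Bosshart et al (2008).

# Illustrative coarse filter: keep retraction F-D curves that show pulling
# (positive) force peaks above a noise threshold; discard pushing (negative)
# or featureless curves.
import numpy as np

def coarse_filter(fd_curves, force_threshold_pN=50.0, min_peaks=3):
    """Return the subset of (distance_nm, force_pN) curves that look like unfolding events."""
    kept = []
    for distance_nm, force_pN in fd_curves:
        above = force_pN > force_threshold_pN                 # positive forces = pulling
        # count rising edges into the above-threshold region as candidate unfolding peaks
        n_peaks = np.count_nonzero(np.diff(above.astype(int)) == 1)
        if n_peaks >= min_peaks:
            kept.append((distance_nm, force_pN))
    return kept

# Synthetic example: 4096 points per curve, as in the protocol above.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 130.0, 4096)                             # tip-sample distance, nm
noise = rng.normal(0.0, 10.0, size=x.size)                    # ~10 pN noise floor
peaks = 120.0 * np.exp(-((x[:, None] - np.array([30.0, 60.0, 90.0])) / 3.0) ** 2).sum(axis=1)
curves = [(x, noise), (x, peaks + noise)]                     # one empty, one with three peaks
print(len(coarse_filter(curves)))                             # prints 1: only the peaked curve survives

On real data, manual fine filtering and classification against known AdiC patterns would still follow, as described above.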
As the results show, the presence or absence of L-arginine, agmatine and D-arginine had no significant effect on the forces measured for the N-His-tagged AdiC. With respect to bacteriorhodopsin and the AdiC antiporter, the obtained results are similar to outcomes reported by earlier investigators (Janovjak et al., 2003; Oesterhelt et al., 2003; Mueller et al., 2003; Janovjak et al., 2004). This confirms that the technique is highly reliable.
A Programmable Force Sensor Technique
The programmable force sensor method of SMFS was first described by Albrecht et al (2003). Unlike HT-SMFS, this technique utilizes a differential format for measuring the forces inherent in the linkages of single molecules. Here, comparisons are made between the rupture force of the bond in the molecule of interest and that of the bond of a known reference molecule (the standard). Another modification of this technique is that the cantilever spring is not used. In its place, the authors used a polymeric anchor and a reference molecule whose bond strength was known and which was attached to a fluorescent tag. This contrasts with the HT-SMFS method, which utilizes displacement and spring constants to measure the forces (Albrecht et al., 2003, p. 367).
In their experiment, Albrecht et al (2003, p. 367) used a 20 bp DNA duplex as the test molecule and a 25 bp DNA duplex as the reference molecule. The polymeric anchor used was a 65 bp oligonucleotide, while Cy5 was used to tag the reference molecule. The two surfaces were detached by stretching the polymeric anchor. Thereafter, the weaker of the bonds between the test molecule and the reference standard was ruptured by the gradual application of force. Breakage of the symmetry resulted from the difference in stability between the bonds (Albrecht et al., 2003, p. 368).
Discussion
As Bosshart et al (2008) show, HT-SMFS is a faster technique which requires less time. In their experiments, entire data sets were collected in only a single day, a big improvement in time utilization over the non-automated SMFS procedure. Another advantage of the HT-SMFS technique is that it is highly objective: subjective biases are eliminated through the automation of the data collection and pre-processing stages. Additionally, HT-SMFS is highly versatile in that it can be used with both crystalline and non-crystalline samples to produce reliable outcomes. The versatility of the technique is further enhanced by the observation by Bosshart et al (2008) that HT-SMFS can also be used to study cloned proteins and native and reconstituted membrane proteins, and is adaptable to dynamic studies that investigate the energy landscape of such proteins.
Besides being versatile, HT-SMFS is highly accurate. As stated earlier, Bosshart et al (2008) were able to validate the accuracy of this method using bacteriorhodopsin. The outcomes obtained conformed to earlier findings by Janovjak et al (2003), Oesterhelt et al (2000), Mueller et al (2002) and Janovjak et al (2004). This shows that the technique is highly accurate, in contrast to the conventional SMFS technique, which has limited capability to resolve the forces. In addition to providing information about the topology of the molecules under study, HT-SMFS can also be used to reveal data about the mechanical stability of structural components.
Yet another advantage of this technique is that it allows all the F-D curves to be recorded and condenses the huge amount of data into a smaller number of force spectra. As a result, it is possible to carry out manual analysis of the data. Moreover, the HT-SMFS method is less laborious since it is semi-automated. According to Bosshart et al (2008), the technique requires the operator to be present for approximately 10 percent of the entire duration of data acquisition. This is significantly less than the time required for the conventional SMFS methods (Bosshart et al, 2008).
The HT-SMFS also overcomes the problem of low efficiency that is inherent in the conventional SMFS method. The conventional SMFS method cannot be used for large-scale projects because of its low efficiency: many F-D curves do not necessarily contain an unfolding event. This was confirmed by Kedrov et al (2004), who observed complete protein unfolding in only about 3 percent of instances using a bacterial sodium/proton antiporter. In contrast, HT-SMFS is highly efficient. As described by Bosshart et al (2008), the HT-SMFS method allowed the collection of up to 40,000 F-D curves daily, a yield much higher than that of the conventional SMFS method (Bosshart et al, 2008).
In the conventional SMFS method, outcomes are encumbered by factors which have an impact on unfolding, such as temperature and ions, and this calls for the collection of large data sets, hence the need for high-throughput protocols such as HT-SMFS (Janovjak et al., 2003; Mueller et al., 2006). Evidently, results from HT-SMFS are not affected by such considerations. Whereas there is a high possibility of experimental errors interfering with the outcomes due to continuous drifts of calibration measures in conventional SMFS, HT-SMFS is, by and large, devoid of these errors (Bosshart et al, 2008).
However, HT-SMFS does not completely overcome the weaknesses of the conventional SMFS technique. For one, complete automation is not possible, because AFM imaging must first be carried out prior to the SMFS procedures. Secondly, despite progress towards pattern recognition and automated alignment algorithms, the collection and filtering of data remain sub-optimal (Kuhn et al., 2005; Marsico et al., 2007; Dietz & Rief, 2007). As Bosshart et al (2008) assert, one F-D curve requires nearly 1 second to be recorded. Further, the manual recording of an entire data set, which has about 200 spectra and where an efficiency of 1 percent is sought, is a burdensome affair. Additionally, the operator performs online filtering during the recording step, and this has been shown to cause a loss of force spectra, resulting in a decline in the efficiency of data collection. The final drawback of HT-SMFS is that it requires the AFM to be operated for prolonged durations.
Even though HT-SMFS is seen to have superior characteristics, the force sensor method also has a number of advantages which make it more suitable than the conventional SMFS technique. First, the force sensor method is characterized by a relatively high symmetry, the net result being that nearly all outside influences cancel out. Secondly, the force sensor method has high precision. A third advantage of the force sensor technique is that it enables the identification of single base-pair mismatches in nucleotides, something which cannot be identified by conventional SMFS. Fourthly, the force sensor method is more useful in certain applications where the differences between the test and reference, as opposed to absolute values, are desirable. Fifthly, it allows many different measurements to be performed concurrently (Albrecht et al, 2003).
Importantly also, the force sensor method can be used to reliably discriminate between specific and non-specific protein interactions. Therefore, this technique is of high value in studies of protein arrays. Since the method discriminates between the different bindings and is characterized by mechanical stringency, it can be used in capture arrays to eliminate background noise and reduce the number of false positives, thus enhancing the multiplexing capacities of protein capture arrays. In sum, the force sensor technique enhances the specificity of protein biochips by eliminating cross-reactions and non-specific interactions. Other advantages of the force sensor method are that it is highly sensitive by virtue of its high discriminatory power and that it can be applied in highly specific parallel assays. The method can also be used to carry out highly accurate assays of single nucleotide polymorphisms (SNPs) (Albrecht et al, 2003). Unlike the conventional SMFS method, it is largely unaffected by prevailing environmental conditions such as ions and temperature (Albrecht, 2003, p. 370).
The force sensor technique has several drawbacks, though, which make it a less ideal technique than HT-SMFS. It largely fails to overcome most of the weaknesses of the conventional SMFS method highlighted earlier. Specifically, the force sensor technique is more laborious and time-consuming and may be affected by global conditions. In particular, these conditions necessitate the use of many controls. Besides, it is not possible to directly quantify the density of the fluorescent tag due to the dissimilarity of the chemical and optical characteristics of the surfaces in use. This reduces the efficiency and yield (Albrecht et al, 2003).
Conclusion
Single-molecule force spectroscopy (SMFS) measurements are indispensable tools in the study of membrane proteins. Whereas both the HT-SMFS and force sensor techniques have vastly improved the utility of the method, the former is seen to be better suited for such studies as it is less laborious, less time-consuming, highly efficient, highly versatile, semi-automated and less subjective.

What is a Volcano?

The word volcano refers to a place (fissure or vent) where molten rock (magma), gases or other debris erupt from the interior of the earth. The origin of the word is associated with the Roman god of fire, Vulcan. According to Roman mythology, he was thought to carry out his iron-smelting activities on an active volcano named Vulcano in the Lipari Islands of Italy (Upton, 1565).
The progressive study of volcanoes, how they are formed and the diversity of their formations has helped many scholars and the wider population understand them better. Most volcanic activity has been associated with destruction, but many of the most spectacular and beautiful landforms are also the result of volcanic activity (Williams, 14).
There are three elements involved in a volcanic system:
A vent, which is the point through which the volcanic material erupts to form a more or less symmetrical cone-shaped structure called an edifice.
The conduit, which is the passageway of the magma to the surface. This is connected to the magma chamber. Due to the instability of some eruptions, fractures develop from the sides of the cone towards the central vent. These fractures act as passages for flank eruptions on the sides of the edifice, forming parasitic cones. Sometimes volcanic gases escape through these vents, which are then called fumaroles.
A reservoir which holds the magma deep in the crust of the earth.
Volcanoes are formed through a process called volcanism. Underneath the earth's crust there is molten rock called magma. For an eruption to occur, there must be a change in pressure in the reservoir that overcomes the resisting pressure produced by the earth's crust. Such changes are brought about by events like earthquakes, plate tectonics, the melting of glaciers, and changes in tides and rainfall (Williams, 19). On reaching the earth's surface, this liquid magma is called lava. Other solid materials, consisting of rocks, ash, cinders (which are fragments of lava) and lightweight rocks called pumice that result from explosive volcanic eruptions, are known as tephra.
Volcanic activity is not limited to the earth's surface, as it also occurs on other planets. Undersea volcanic activity produces landforms known as submarine or underwater volcanoes.
How it occurs.
A volcano originates when molten rock material, also known as magma, and associated gases find their way to the surface of the earth. Rocks beneath the earth's crust are held under great pressure and very high temperatures, hence existing in a semisolid state, and at the slightest opportunity the magma spews out to balance the pressure.
Types
Volcanoes exist in three stages in their life cycle:
Active stage, which is an early stage and eruptions are frequent.
Dormant stage, when the eruption becomes less frequent (sleeping stage).
Extinct stage, when there have been no eruptions in a long historic time. Volcanoes are also grouped according to the way they appear on the surface of the earth. The resulting mound of a volcano is called a cone, which consists of lava and rocks exuded from the crust by the molten magma (Scarth, 26). Other constituents are ash and cinders (small lava fragments). The prominent shape and size of the cone are dictated by the type of eruption and the cone's material.
There are three major types, defined by their form and composition:
1. Composite Volcanoes (Stratovolcano)
This appears as a cone-shaped mountain whose slopes are smooth, steep and bare. Active ones have a prominent plume of smoke from one central vent. The lava emitted is usually thick and solidifies quickly before flowing a long distance. At times, alternating layers of lava and ash are formed. Examples are Mt. Kilimanjaro in Tanzania and Vesuvius in Italy. In the case of a violent explosion, the top part is blown off, leaving a huge crater-like depression called a caldera. Other calderas are formed by subsidence when the supply of magma is depleted. This forms a good site for a lake, e.g. Crater Lake in the USA (Morton, 35). Other examples are the Arenal and Irazu volcanoes in Costa Rica.
2. Shield or Lava Cone Volcanoes.
This type of volcano is mainly made of fluid lava flows and appears as a large volcanic dome with long, gentle slopes. At times this kind of volcano can have multiple vents producing lava flows. A good example is Mauna Loa in Hawaii.
3. Ash/Cinder Cone Volcanoes.
This type of volcano is produced through the accumulation of loose pieces of magma, known as scoria, that fall and accumulate around the vent after a moderately explosive activity. Examples include Volcán de Fuego in Guatemala and Paricutin in Mexico. Other landforms are produced by fissure eruptions that cover large areas; their features include lava plateaus. When water comes into contact with hot rocks underneath on its way out, the result is hot springs and geysers. The main difference between a hot spring and a geyser is the pressure with which hot water exits the ground (Chester, 1567); from a geyser, water is ejected explosively. Hot springs are common in Iceland, and geysers are found along the floor of the Great Rift Valley in East Africa.
Although many volcanoes are known to start on land, many more begin at the bottom of the ocean. These are known as submarine or underwater volcanoes. Most of these eruptions occur along plate tectonic boundaries where plate movements are common. They are, however, not limited to deep waters; they can also occur in shallow waters, ejecting magma and other material onto the surrounding land (Upton, 76). There are about 5,000 active underwater volcanoes. Every year, underwater volcanoes account for more than 75 percent of the total lava that erupts. The eruption of a submarine volcano is very different from that of a surface volcano due to the unlimited supply of water to cool the hot lava.
Upon eruption, a shell of solidified lava forms immediately around the molten lava, creating what is known as pillow lava. These volcanoes are responsible for the formation of islands, e.g. the Hawaiian Islands. The study of submarine volcanoes has been greatly handicapped by the environment in which they occur. It is worth noting at this point that the biggest submarine volcano is in Hawaii (Scarth, 92); it is actually taller than Mt. Everest, the tallest mountain on the earth's surface, if measured from the sea floor.
In general, therefore, volcanoes are caused by pressure that builds below the earth's crust and forces magma to erupt through weak points of the earth's surface. When the magma, in the form of lava, cools, it forms rock. Volcanoes are very active and very dangerous to the lives of living organisms. If there is a likelihood of a volcano forming, then the area should be cleared completely so that there are no injuries or deaths. Mountainous areas are those most affected by volcanoes, and people should take care when living in such areas. The topic of volcanoes has been covered comprehensively by the author.

Travel through Time.

This paper focuses on a work of art made by Claude Lorrain entitled Pastoral Landscape. This piece of art shows a scene from the past featuring two people in the foreground who seem to be talking, animals grazing on grassland, beautiful large trees, a bridge built over a winding river and even a castle in the background. According to the Yale University Art Gallery, the painting was commissioned in 1648 by Hans Georg Werdmüller, a Swiss military engineer, who probably arranged it through an intermediary. It portrays the calmness of rural living, a lifestyle that is slowly being forgotten nowadays in this extremely modernized world. This painting embodies the clean, unpolluted and ideal environment of the 17th century. The oil painting may also serve as a recollection of the old flora and fauna of the earth. At the present time, when the world is continuously changing, this piece of art can serve as a peek at the old earth, peaceful and unspoiled. Though, of course, this scene may still be seen in some parts of the world, Pastoral Landscape may offer practical and educational use for the youth of today. It is a record of a scene that once existed in a faraway land. Looking upon Pastoral Landscape may be like looking at a sky filled with stars: a travel through time.
If one analyzes the piece of art part by part, one witnesses its characteristic life in stillness. Ironic as it may be, in the motionlessness of the painting we see life. Life is present in every member of the scene portrayed by Lorrain through his work. The painting exhibits living creatures: the two men, four cows, and plants of various kinds. Life is not limited to these; if we look closely, even the river, the soil, and the atmosphere contain life. Life may not have been clearly portrayed in those areas because of its minuteness in size, but life is definitely there. Here we see the oneness of science and art. Let us define science: science is a means to acquire knowledge (Merriam-Webster Online Dictionary, 2009). Personally, if asked to define science, I would say that science is everything. It is the wind that blows over the river, the worm crawling in the soil, and the bee pollinating the plants; it is every little thing in this work of art. Why? Because everything in this world is a means to gain knowledge. Everything is subject to science. On the other hand, if we define art, things become complicated. The definition of art is debatable, but considering one well-known definition, we can say that art is a means of expressing thoughts and emotions (Levinson, 2003, p. 5). In my opinion, if asked to define art, I would say that art is a fountain of beauty. In view of Pastoral Landscape, every element in it is science and art at the same time. Every element of the painting is science, since each possesses life and is therefore subject to biology, and simultaneously every element in it is art, since they stir emotions and are fountains of beauty. Biology and art as one: this is Pastoral Landscape. In viewing the art piece, one can associate the existence of life with beauty. All of creation was made beautiful, and in this piece of art Lorrain put nature, life-harboring as it is, in the spotlight.
Here is the scene in Pastoral Landscape as I depict it. As the sun rises, it bathes the plants with its rays, allowing them to photosynthesize and feed themselves to survive. These plants live and are themselves a viable food source for the animals around them. Animals then feed on the plants, providing themselves with nutrition to sustain their bodily processes and, for some, to recover from injuries. These animals can be found in the wild or in pastures taken care of by people, who in turn make a living through shepherding. As shepherds, these men care for the animals and, over time, use them to earn money by selling their milk, skin or even the whole animal for human consumption. Men age and deteriorate; as they grow older, they continue to consume animals for food but eventually reach the end of their lives and die. When people die, they are buried under the ground, where they are decomposed by microorganisms and other small animals. The nutrients in the bodies of the dead go back to the soil and are then absorbed by plants for nourishment. Here is one complete food chain. The elements of the food chain are the same elements present in Lorrain's work of art. Here, once again, we witness the inevitable connection between biology and art. The picture produced by Lorrain is an embodiment of a natural phenomenon studied in biology to further understand nature and its inhabitants.
Pieces of art continually deteriorate through time. For art's sake, this is definitely not good. Art's main preoccupation is the production and preservation of information, and biology affects an artifact's original state (Wilson, 2002) through the presence of microorganisms in the atmosphere that can destroy masterpieces of art. Art and biology, once more, cross paths. With this at hand, we may say that biology is in debt to the arts. If we observe the artwork of Lorrain, and many other art pieces, we may say that the physical qualities of the oil painting are not at their best. This, of course, is a natural occurrence and is caused by biological activities in the environment. Here is one more inevitable link between science and art. If a piece of art, which is meant to store information, gets damaged by nature, then the purpose of art shall be lost and the correlation of biology and art will be a failure.
During the time of Lorrain, landscape art, if I may call it that, was not yet popular. He essentially created one whole new field in the arts. Through his artwork, we witness his innate creativity. Innovation, which is witnessed through the formation of art, is a biological process relying upon an intricacy of nerve circuitry and neurotransmitter release, so that understanding an art's modernism differs from the manner by which we interpret the art's creation (Wilson E. O., 1998, p. 236). Pastoral Landscape is a beauty that might pass before your eyes and be forgotten after. Unlike other masterpieces such as the Mona Lisa, Lorrain's style of work has been imitated throughout the centuries following 1648, so that this type of piece most of the time feels like déjà vu for people. Here, we see that the manner in which a person matures can dictate his view of art. Art is, indeed, relative. In my case, I grew up having seen so many art pieces of this type that seeing another would not have amazed me that much. This reaction is essentially dictated by epigenetic factors, factors that are not controlled by genetic characteristics, in man. Even the most widely known works of art might be understood basically with knowledge of the biologically evolved epigenetic rules that steered them (Wilson E. O., 1998, p. 233). The concept of the beauty of a work of art is, then, dependent on a person's background.
Arts and science go hand in hand. They need each other deeply, for the arts need the innovations of science and science, in turn, needs the freedom of the arts. Together, they are linked by logical interpretations of a piece of work, be it from the artistic or the scientific side (Wilson E. O., 1998, p. 230). Pastoral Landscape, as an artwork, has served its purpose of storing historical information by portraying an old landscape, and through this purpose it can be a tool to learn more about the past through the innovations of the present time.

A Sustainable Home Proposal.

If my parents were going to build a second home, I would suggest building one in a mountainous region, preferably on a mountain that has relatively fewer trees and stronger winds. Provided that there are no budget and resource constraints, I would like to propose some ideas to make the house as sustainable as possible.
I would propose installing solar panels on the roof of the house, as well as a windmill nearby. Both wind and solar energy are known for their variability, so constructing both solar panels and a windmill would guarantee an almost constant supply of power. Implementing these sustainable sources of energy would cause less pollution by lessening the energy consumed from power plants that generate electricity by burning fossil fuels.
For exterior construction materials, I would recommend sustainable materials such as Hemcrete and Papercrete, which are concrete-like materials made from hemp, paper and other fibrous materials and would help reduce unrecycled textile and fiber waste. I would also encourage the use of fly ash as an additive to cement. This would help recycle fly ash, as well as lessen the global energy consumed in mining and heating limestone to produce cement. Moreover, I would suggest reusing unneeded construction materials should it be possible to procure them. Reusing construction materials is both inexpensive and sustainable.
I would advise the use of X-board for both indoor construction and furniture. X-board is a honeycomb construction board made from recycled fiber. Unlike medium-density fiberboard or plywood, X-board's primary material is not derived from wood and is not processed using formaldehyde, thus lessening indoor pollution and reducing our dependence on trees (Grigsby 2009).
Household wastes should also be addressed accordingly. Biodegradable solid waste should be placed in a compost pit to serve as fertilizer for a nearby home garden. Liquid sewage would be treated with an aeration system so that it can be reused to water plants, while non-biodegradable wastes would be segregated and delivered to the appropriate recycling facility. Implementing these measures would reduce environmental impact and minimize the carbon footprint of household products used.

CAUSES OF GREENHOUSE EFFECT
The following are some human activities that contribute to rising concentrations of greenhouse gases in the atmosphere.

Utilization of Fossil Fuels
Use of Aerosols and CFCs
Industrial and Electrical Generation
Agricultural Processes
Deforestation and Clearing of Land

The first four causes listed result in an increased concentration of greenhouse gases, primarily through their direct atmospheric emissions. Deforestation and the clearing of land, on the other hand, indirectly raise greenhouse gas concentrations by disrupting the role plants play in absorbing carbon dioxide.

OUR MORAL IMPERATIVE ACCORDING TO AL GORE
Gore's documentary An Inconvenient Truth demonstrated that certain human activities are responsible for the increasing concentrations of greenhouse gases in the atmosphere, and are thus causing global catastrophes such as rising sea levels, global warming and climate change. It urges its viewers and readers to see stopping global warming as our moral obligation. However, addressing this concern would impose grave restrictions on industrial development among nations and business organizations. The documentary therefore argues that although the fight against global warming may be inconvenient to most people, everyone should participate because it is a moral imperative. It persuades every reader and viewer to do the right thing: stop global warming and conserve the environment.

THE INCINERATION OF MUNICIPAL WASTE
Incineration is a waste management method that involves the combustion of waste. An incineration plant uses a combustion chamber to burn solid waste. The following are the advantages and disadvantages of incinerating municipal waste.

Advantages
Incineration Plants Require Less Land. Compared to other waste management methods and facilities, an incineration plant needs relatively less land to operate.

Unaffected by Weather Conditions. Other waste treatment methods such as open dumps and ocean dumping cannot operate effectively during harsh weather. An incineration plant, on the other hand, can operate smoothly regardless of the weather.

Odor-Free Residue. Unlike other treatment methods, incineration reduces solid waste to ashes, leaving an odor-free residue. This residue can then be disposed of using other waste management methods.

Disadvantages
Cost. The construction and maintenance of an incineration plant are expensive.

Requires Skilled Workers to Operate. Unlike other systems, an incineration plant requires skilled personnel to manage the waste from combustion through to final disposal. Incineration is a hazardous procedure and also requires strict and steady maintenance.

Emissions. An incineration plant uses combustion to burn municipal waste, resulting in atmospheric emissions. These emissions may cause other environmental problems, which is why such plants need to be situated in less populated areas.

Use of Monoclonal Antibodies in the Prognosis and Treatment of Head and Neck Cancer.

Monoclonal antibodies, usually abbreviated as mAb, are known to be very important reagents for biomedical research. They can be developed to diagnose a number of human diseases and to treat the diagnosed diseases, which range from cancers to contagious infections. Monoclonal antibodies have been produced successfully from cell lines and clones obtained from different animals such as rabbits and rats. Such animals first have to be immunized with the substance being studied. Once such immunization has been done, the clones or cell lines are produced from the fusion of the B cells generated by the immunized animal with myeloma cells.

In order for the desirable mAb to be produced, the cells can be grown in two major ways: one, by injection into the peritoneal cavity of a mouse, or two, by using in-vitro culture, also known as tissue culture. In the first case, the mouse is further processed by combining its ascitic fluid with culture supernatant so that the obtained mAb reaches the desired concentration and purity. Both human beings and mice possess the ability to produce antibodies that can recognize and bind virtually any antigenic epitope, and such antibodies can discriminate between two similar epitopes. This nature provides the key basis on which protection against some disease organisms can be achieved, and it also makes such antibodies equal-opportunity candidates for targeting other kinds of molecules in the human body. These molecules range from protein receptors found on the surfaces of normal body cells to the cancer cells which pose one of the greatest threats to human health today.

This ability and specificity of antibodies to attack cancer cells has made them promising agents for therapy. Such therapies include the use of antibodies that bind to specific targeted cancer cells within the body of a patient. There can also be coupling to cytotoxic agents such as radioactive isotopes, or the administration of complex antibody derivatives to a patient so that the cancer cells can be targeted. This specificity ensures that other normal body cells are not interfered with.

This has seen monoclonal antibodies being widely used in diagnosing diseases and infections, and also in manufacturing research reagents for future studies. For better monoclonal production and effectiveness, a number of biochemists have been coupling such monoclonal antibodies to other molecules, such as fluorescent molecules, which aid in target imaging. Such antibodies have also been coupled with strongly radioactive atoms, such as iodine-131, that aid in the destruction of the targeted cells.

Monoclonal Antibodies in the Prognosis and Treatment of Head and Neck Cancer

Humankind today faces a number of threats from major diseases, infections and cancers. Head and neck cancer, for instance, is one of the cancers for which better treatment processes are being sought. Head and neck cancer refers to a group of biologically similar cancers originating within the upper parts of the aerodigestive tract. This includes the lips, the mouth (also called the oral cavity), the nasal passage, the larynx and even the pharynx. Most head and neck cancers are squamous cell carcinomas (SCCHN), which originate from the epithelium, or mucosal lining.

From the major biochemical studies that have been done over the last few years, head and neck cancer has been noted to spread to the lymph nodes in the neck, and this is usually the first sign or manifestation that the disease has fully developed in the body. Experts believe that this cancer is strongly associated with certain environmental and lifestyle risk factors such as heavy tobacco smoking, excessive intake and abuse of alcohol, exposure to ultraviolet rays, and certain viral strains, some of which are sexually transmitted, such as the human papillomavirus.

Most of these cancers are known to be aggressive in their biological expression and behaviour. This is why patients with such cancers often develop second primary tumours. The issue of cure and treatment has been scrutinized by doctors and biochemists over the years. Reports have shown that these cancers are curable upon early detection. Treatment is typically given in the form of surgery and chemotherapy, and radiation therapy is also known to play a key role in treating these cancers. There has been increasing interest in monoclonal antibodies and whether they can be effective in dealing with such cancers.

For instance, the Medical Department of the University of Nebraska announced that it had successfully demonstrated safe and feasible human-derived monoclonal antibodies based on a product known as HumaRAD. The combination gave a positive outcome in its ability to treat head and neck cancer. The research suggested that HumaRAD could successfully deliver therapeutic dosages of radiation that would directly result in treating such cancer. Being a monoclonal antibody derivative, HumaRAD has been designed in such a way that it can deliver radiation in dosages somewhat higher than current external radiation, and such internal radiation delivered by HumaRAD has been noted not to cause toxicity within the body system. Present monoclonal antibody technology has produced techniques through which such antibodies can be used in the production of drugs that fight head and neck cancer.
When a monoclonal antibody has attached to a cancer cell within the affected body part, it can function in a number of ways to eliminate the cancer cells.
The antibodies make the head and neck cancer cells visible, which gives the immune system access to those cells. In this way, monoclonal antibodies are useful in effecting cancer elimination: once the cells are marked, the immune system is able to attack the foreign, invading cells within the body. The problem is that some cancer cells may be read by the immune system as normal cells and hence escape clearance. For this reason, a monoclonal antibody should be directed so that it attaches itself to a specific part of the cancer cells; the antibody thereby marks such cancer cells and makes the immune system able to recognize them. This is one of the basic principles behind the application of monoclonal antibodies in head and neck cancer treatment.
For instance, the monoclonal antibody drug rituximab, also called Rituxan, attaches to a specific protein, CD20, which is found mainly on the surfaces of B cells, one of the types of white blood cells. Certain kinds of lymphoma arise from such B cells. Once rituximab has attached to this protein on the surface of a B cell, it makes the cell visible, and the immune system is then able to attack it. Rituximab is therefore useful in lowering the quantity of malignant B cells, allowing a new, non-cancerous population of B cells to develop.
Another way monoclonal antibodies can be useful in the control and treatment of head and neck cancer is by blocking growth signals within such cells. Chemicals known as growth factors attach themselves to receptors on the surfaces of both normal head and neck cells and cancer cells, signalling the cells to grow. The problem is that some malignant cancer cells make extra copies of the growth receptor, which allows them to grow faster than the other cells. Monoclonal antibodies can block such growth receptors and so prevent further growth. A well-known drug here is cetuximab, or Erbitux, a monoclonal antibody drug approved for the treatment of head and neck cancers, among others. It attaches to the receptors on the cancer cells so that they won't accept growth signals from the growth factors. Because the cancer cells carry many more of these receptors than healthy cells do, the drug can differentiate between the two. Blocking the signals in this way kills the cancer cells or inhibits their growth.
Another approach that has been utilized when using monoclonal antibodies in the treatment of head and neck cancer is the delivery of radiation to the cancer cells. This is achieved by attaching radioactive elements or particles to a specific monoclonal antibody, which allows doctors to deliver the radiation directly to the specific cancer cells. When this is done properly, the healthy cells are not damaged as much by the radiation. For head and neck cancers, such radiolabelled monoclonal antibodies can deliver relatively low doses of radiation over long periods.
A further way of treating such cancer cells has been the delivery of powerful antibody-carried drugs into the cancer cells. Such anti-cancer medications are usually powerful toxins and have to be attached to a monoclonal antibody for effectiveness. The drugs remain largely inactive until they reach the target cells, which lowers the chance of harming other healthy cells. One example here is Mylotarg, which has been used against cancers such as leukemia. Its monoclonal antibody component, gemtuzumab, is capable of attaching to specific receptors; the drug then enters the cancer cells and kills them.
Considerations when Deciding on Monoclonal Antibody Drug Treatment
There are a number of issues that should be taken into consideration before a patient is exposed to any antibody-derived drug in the prognosis and treatment of head and neck cancer. For instance, it is important to have a discussion with the doctor about the available options for managing the cancer. This matters because the patient and the doctor can then weigh the risks and benefits of each form of treatment, including an analysis of the monoclonal drug and the body's tolerance to it. There are therefore a number of questions a patient should seek clarification on before attempting any drug. First, the risks and benefits of the drug should be made known so that the patient is not harmed in the end, especially since some monoclonal drugs are approved only for use when the cancer is in its advanced stages.
Another important thing is for the patient to understand the possible side effects of the drug; the doctor is therefore obligated to determine the right drug and the right dosage for that particular patient. It is also necessary that the patient is made aware of the cost of the drugs before starting the dosages, because such drugs are expensive and financial constraints later on could disrupt the healing process. A further consideration is whether such a drug is available in clinics and ready for clinical trials. Clinical trials are studies aimed at monitoring new drugs and patients' responses to them. All these measures help ensure proper prognosis and treatment of head and neck cancer.
When such monoclonal antibodies are used in the prognosis and treatment of head and neck cancer, they have been noted to be effective and to bring about good results. For instance, the approximate five-year survival rate across all developmental stages of head and neck cancer is about 35 to 50 percent, and this prognosis is largely due to the late presentation of such cancers. This puts monoclonal antibodies among the most promising tools in the fight against head and neck cancers, among others. Generally, a varied number of monoclonal antibody derivatives and drugs have been developed and made available for the treatment of different types of human cancer, and there have been increasing clinical trials aimed at studying such monoclonal antibodies and derived drugs to see how effective they can be in the treatment of nearly any kind of cancer affecting human beings today.

Hektoen Agar

1. Hektoen Agar is a selective medium. It is also a differential, undefined medium. Why is that formulation desirable?
Ans. Hektoen Agar is a selective medium that helps in detecting and isolating Salmonella and Shigella organisms from other enteric bacteria. Since it can also distinguish the growth of Salmonella from Shigella spp., it can be classified as a differential medium. The bile salts present in the composition of the medium inhibit the growth of gram-positive organisms in the culture and allow only gram-negative bacteria to grow. Sucrose, salicin and lactose are added to Hektoen Agar so that organisms that ferment these sugars can be distinguished from the non-fermenting pathogens. Sodium thiosulfate serves as the sulfur source from which Salmonella spp. (but not Shigella spp.) produce hydrogen sulfide, and ferric ammonium citrate acts as the indicator (reacting with the hydrogen sulfide to give a visible color): the hydrogen sulfide reacts with the iron to form a black iron sulfide precipitate. Thus black-centered Salmonella colonies represent precipitated iron sulfide, while Shigella appear as bluish-green colonies. The pH indicators, acid fuchsin and bromothymol blue, reveal acid production by sugar fermenters and also help in the improved recovery of gram-negative bacteria. So the composition of Hektoen Agar is peptone or yeast extract, bile salts, sucrose, lactose, salicin, ferric ammonium citrate, sodium chloride, sodium thiosulfate, acid fuchsin, agar and bromothymol blue (qtd. in King and Metzger 77).

2. Hektoen Agar was designed to differentiate Salmonella and Shigella from other enterics. Salmonella species sometimes produce a black precipitate in their growth; Shigella species do not. If you were designing a medium to differentiate these two genera, which ingredients from this medium would you include? Explain.
Ans. Shigella species are non-motile, cannot ferment lactose in the medium, and do not produce gas or hydrogen sulfide. Most Salmonella species, in contrast, produce gas during fermentation and release hydrogen sulfide. Adding ammonium ferric citrate and sodium thiosulfate to the newly designed medium will therefore differentiate Salmonella spp. from Shigella spp.: Salmonella colonies appear black-centered because the hydrogen sulfide they produce reacts with iron, while Shigella colonies appear bluish green. Ammonium ferric citrate, a source of iron, is used as the indicator for the hydrogen sulfide (H2S) produced by Salmonella spp., and sodium thiosulfate is used as the substrate for building hydrogen sulfide, which together with the iron gives a black precipitate of iron(II) sulfide (FeS). H2S-positive colonies (Salmonella spp.) have black centers and a bluish-green periphery; Shigella do not release hydrogen sulfide and hence appear only bluish green (qtd. in King and Metzger 77).

3. All enterics ferment glucose. What would be some consequences of replacing the sugars in Hektoen agar medium with glucose? What color combinations would you expect to see?
Ans. Since all enterics ferment glucose, replacing the sugars with glucose would cause acid production by essentially every enterobacterium that grows, so most colonies would show the yellow-to-salmon-pink color of fermenters and the medium would lose much of its differential value. Salmonella spp. would still show black-centered colonies because of hydrogen sulfide production, but Shigella would no longer stand out as bluish-green non-fermenters, since they too ferment glucose. Colorless colonies would be shown only by bacteria such as Pseudomonas (an obligate aerobe that does not ferment glucose) that are not inhibited from growing on Hektoen agar (see the sketch below).
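As a summary of the colony-color logic described in answers 1 through 3, here is a minimal Python sketch. The function name and the example organisms are illustrative assumptions only; the sketch simply encodes the two observable traits the medium reports, namely sugar fermentation and hydrogen sulfide production.

# A minimal sketch of the Hektoen colony-color logic described above.
# The rules and the example organisms are simplified illustrations only.

def interpret_hektoen_colony(ferments_sugars, produces_h2s):
    """Return the expected appearance of a colony on Hektoen agar.

    ferments_sugars -- True if the organism ferments the added sugars (acid lowers the pH)
    produces_h2s    -- True if the organism reduces thiosulfate to hydrogen sulfide
    """
    color = "yellow to salmon-pink" if ferments_sugars else "bluish green"
    center = " with a black center (iron sulfide precipitate)" if produces_h2s else ""
    return color + center

if __name__ == "__main__":
    examples = {
        "Salmonella (non-fermenter, H2S-positive)": interpret_hektoen_colony(False, True),
        "Shigella (non-fermenter, H2S-negative)": interpret_hektoen_colony(False, False),
        "E. coli (fermenter, H2S-negative)": interpret_hektoen_colony(True, False),
    }
    for organism, appearance in examples.items():
        print(f"{organism}: {appearance}")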

4. Which ingredient in Hektoen Agar contains carbon? Which ingredient contains nitrogen?
Ans. The mixed peptone and yeast extract are rich sources of nitrogenous compounds, vitamins, carbon, sulfur and amino acids.
MacConkey
With respect to MacConkey Agar, what would be the possible consequence of
A. Replacing the lactose with glucose
Ans. MacConkey agar is an inhibitory and differential medium used to distinguish lactose-fermenting Gram-negative organisms from those that cannot ferment lactose. By replacing lactose with glucose, the medium would no longer distinguish lactose fermenters from non-fermenters, because all Enterobacteriaceae can grow on and ferment glucose, so nearly every colony would appear as a fermenter.
B. Replacing the neutral red with phenol red (yellow when acidic, red or pink when alkaline)?
Ans. Crystal violet and bile salts are the inhibitory agents, and neutral red is the pH indicator. Neutral red is normally red and changes to yellow at pH 6.8-8.0, while phenol red is normally yellow and changes to red at pH 6.8-8.0. So in the presence of lactose fermenters, lactose is converted into acid and the colonies turn pink-red with the neutral red indicator but would turn yellow with the phenol red indicator. In the case of non-lactose-fermenting bacteria, no acid is produced and the colonies remain colorless with either indicator (see the sketch below).
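The indicator swap described above boils down to a small piece of conditional logic, sketched below in Python. The function name is hypothetical and the color outcomes are simply those stated in the answer above, so this is an illustration rather than a laboratory reference.

# A small sketch of the pH-indicator logic above: the color a colony takes on
# MacConkey agar depends on the indicator used and on whether the organism
# ferments the sugar (fermentation produces acid, lowering the local pH).

def macconkey_colony_color(indicator, ferments_sugar):
    acidic = ferments_sugar  # fermentation -> acid -> low pH around the colony
    if indicator == "neutral red":
        return "pink-red" if acidic else "colorless"
    if indicator == "phenol red":
        return "yellow" if acidic else "colorless"
    raise ValueError(f"unknown indicator: {indicator}")

if __name__ == "__main__":
    for indicator in ("neutral red", "phenol red"):
        for fermenter in (True, False):
            label = "lactose fermenter" if fermenter else "non-fermenter"
            print(f"{indicator}, {label}: {macconkey_colony_color(indicator, fermenter)}")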

History of microbiology.

Antonie van Leeuwenhoek was the first to observe microorganisms using a microscope. He built more than 500 simple microscopes. Though they were not as powerful as the compound microscope, they led the field for more than a decade. Leeuwenhoek's simple microscopes magnified objects to over 200 times actual size, with clearer and brighter images than any of his predecessors had achieved. He was the first person to observe a company of living animalcules (bacteria) in tooth plaque, although nobody at the time knew their significance in causing disease. His other discoveries included algae, blood cells, sperm cells, foraminifera, nematodes and rotifers. He observed blood flow in capillaries and the pattern of muscle fibers. In fact, he initiated the fields of bacteriology and microbiology, without which Edward Jenner would have had no basis for inventing the vaccine for smallpox, a disease caused by the variola virus. Edward Jenner was of course a great scientist who saved many lives by inventing the smallpox vaccine, but he cannot be considered an all-time great microbiologist when compared with Antonie van Leeuwenhoek, who is regarded as the father of microbiology.
To develop a vaccine, the disease must first be known: the causative organism must be isolated from a diseased host and must be able to produce the disease symptoms when introduced into a healthy host. These criteria are famously known as Koch's postulates, developed in the 19th century as general guidelines to identify pathogens. Attempts were made to apply Koch's postulates to the diagnosis of viral diseases at a time when viruses could not be isolated in culture, which made their strict application impossible.
Disease Epidemics
A massive outbreak of plague occurred in 1348-49 and had a long-term impact on the socioeconomic conditions of Europe's population. The main symptom of this plague is a swollen, painful lymph gland called a bubo, usually in the groin, armpit or neck, which is why it is also called bubonic plague. It is in fact a vector-borne disease of rodents, caused by Yersinia pestis and spread to humans from rodents by fleas. Millions of people in Europe died from plague in the Middle Ages, when human homes and places of work were inhabited by flea-infested rats. The outbreak of the disease occurred due to weather conditions favorable to the vector fleas. The disease can be controlled either by reducing the fleas or by reducing the rodent population; reducing rodent habitat such as rock piles, junk and cluttered firewood can reduce the rodent population.
Some medical historians think that the bacterium became less virulent; others, that the disease vector's host became increasingly separated from human beings as brick houses replaced timber-framed dwellings; still others emphasize the importance of quarantine measures in restricting plague in Europe. The United States Centers for Disease Control (CDC) monitor these outbreaks and make efforts to control the disease in rodent populations where it is still active. Plague still poses a significant threat to human health. The use of antibiotics such as tetracycline and streptomycin for the treatment of plague has been recommended by the WHO Expert Committee, but their use has led to the development of antibiotic-resistant Y. pestis strains. Several alternatives have therefore come into existence, such as immunotherapy, non-pathogen-specific immunomodulatory therapy, phage therapy, bacteriocin therapy, and treatment with inhibitors of virulence factors.


It is certainly a good idea to choose the formulation containing the inactivated virus and the CpG, because Class B oligodeoxynucleotides will help B cells mature, thereby inducing a humoral immune response; this makes CpG a good adjuvant to the vaccine. As for choosing the right delivery system, clinical research has shown that intramuscular administration of flu vaccines increases the amount of circulating antibodies and mucosal IgG, while intranasal administration increases the amount of mucosal IgA (Stephenson et al.). IgA is an immunoglobulin found mostly in the body's secretions and plays a large role in mucosal immunity; IgA in the blood causes antibody-dependent cell-mediated cytotoxicity, in which cells bound by antibodies are targeted for lysis. IgG is an immunoglobulin related to the immune system's secondary response, and protects the body by binding to and immobilizing pathogens. In addition, the study showed that intranasal administration produced a weaker humoral response than intramuscular administration, because the intranasal route only produced a significant increase in mucosal IgA, while the intramuscular route produced a significant increase in systemic and mucosal IgG; thus I disagree with the suggestion to use intranasal administration. Additionally, intranasal vaccination has been known to cause an array of side effects, including vomiting and nasal inflammation.
    Additionally, we would expect that intramuscular administration of the inactivated virus plus CpG would confer the most protection, since it would stimulate systemic antibody and mucosal IgG production. While this route does not protect much against a variety of strains of flu, it is the most effective at protecting the body against a specific strain. This happens because the IgG produced by intramuscular administration provokes the body's secondary immune response, involving the production of memory cells and specific antibodies; the specificity of this pathway limits the number of different viral strains it can effectively counter. The IgA produced by nasal administration is effective at producing inflammation and antibody-dependent cell-mediated cytotoxicity, targeting cells for destruction; since IgA is able to bind to different types of viral strains, it can protect the body from a wider variety of them.
We can determine whether the intramuscular inactivated virus plus CpG formulation's response is protective by measuring baseline and experimental mucosal IgA and IgG through nasal secretions, and by measuring serum hemagglutination inhibition. A viral hemagglutination inhibition assay involves the agglutination of red blood cells by a virus in suspension: by diluting the virus suspension and adding it to known amounts of red blood cells, the researcher can estimate the number of virus particles by looking at which dilution inhibited hemagglutination. Measuring hemagglutination inhibition in patients given either nasal or intramuscular administration can help the researcher estimate the number of virus particles circulating in their blood or nasal mucus; this was done in the study.
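To make the dilution-endpoint idea concrete, here is a minimal Python sketch with entirely hypothetical plate readings. It only illustrates how an endpoint titer is read off a two-fold dilution series (the last dilution at which the scored effect is still observed); a real assay involves standardized red-cell suspensions, serum controls and replicate wells.

# Illustrative endpoint-titer reading from a serial dilution series.
# The dilution factors and plate readings below are hypothetical.

def endpoint_titer(dilutions, positive_readings):
    """dilutions: reciprocal dilution factors in increasing order, e.g. [8, 16, 32, ...]
    positive_readings: parallel list of booleans from scoring each well."""
    titer = None
    for dilution, positive in zip(dilutions, positive_readings):
        if positive:
            titer = dilution  # the last positive well sets the endpoint
    return titer

if __name__ == "__main__":
    dilutions = [8, 16, 32, 64, 128, 256]               # hypothetical two-fold series
    readings = [True, True, True, True, False, False]   # hypothetical well scores
    print(f"Endpoint titer: 1:{endpoint_titer(dilutions, readings)}")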
    Based on the results of the study, I would recommend giving the inactivated virus plus CpG intranasally to the young, healthy, non-pregnant segment of the population, because it is more effective on young people than it is on old people and it stimulates an antibody (IgA) which has been shown to have a stronger immunological memory than IgG and better cross-reactivity with different influenza strains; however, the intranasal administration is less effective at producing a strong response to a single strain of flu virus. The intramuscular formulation would produce the highest humoral immune response because it would stimulate the production of more antibodies by B cells and affect both mucosal and systemic immunoglobulin production, whereas the intranasal administration only significantly affects mucosal immunoglobulin production.

Personalized medicine

Personalized medicine is considered to be the modification of medical treatment to suit each patient's individual characteristics. This does not imply that medical services and drugs are made uniquely for a particular patient; rather, it enables the classification of patients into differing subpopulations that can be identified by their susceptibility either to a disease or to the treatment of a certain disease. It has always been physicians' goal to adjust treatment to fit the specific needs and characteristics of the patient (Great Britain Parliament House of Lords Science and Technology Committee, 2009). Personalized medicine is defined by the Congress of the United States as the application of molecular or genomic data to determine the predisposition of a person to a certain condition, to facilitate the invention and clinical testing of upcoming products, and to target a more efficient way of delivering healthcare (Rothwell, 2007). This paper will address the issue of personalized medicine and its potential impacts on society.
In 2001, the first publication of the human genome was made. Since then, there has been increased utilization of genetic techniques in the field of personalized medicine. Personalized medicine may be taken to be an extension of the conventional methods of treating and understanding diseases, but it goes further than this since it has more accuracy. It involves the use of a patient's genetic variation to guide the choice of treatment and drugs in order to achieve better results and reduce detrimental side effects. It also affects a patient's dosage for a particular prescription, because genes can be used to predict a patient's absorption rate. Knowledge of a patient's genetic make-up can show his or her vulnerability to conditions that have not yet occurred, which would help the patient and the doctor come up with prevention and monitoring plans. Generally, personalized medicine implies that a patient can get the right drug, tailor-made to their condition (Great Britain Parliament House of Lords Science and Technology Committee, 2009).
Impact of personalized medicine on society
    Personalized medicine can be used to manage chronic disease more easily, because the ability to profile the expression levels of metabolites and proteins, and the sequence and structure of genes, can help in classifying diseases and choosing the most appropriate drugs for the patient. This would allow easier management of complex diseases like Alzheimer's, cancer and heart disease, among others. These diseases are usually considered a burden on healthcare systems due to their chronic nature (Hedgecoe, 2004).
Development of drugs is an expensive and lengthy process. In theory, pharmacogenomic data on how a patient's genes affect drug response can be used to reduce the cost and time in which a drug is developed. It is said to have the potential to lower the number of drug failures, since it would enable researchers to focus on the patient populations where safety or efficacy has a higher chance of being proved. The implication is that researchers can run more targeted clinical trials, which are likely to be more successful (Great Britain Parliament House of Lords Science and Technology Committee, 2009).
    Pharmacogenomics can help to cut costs by shortening clinical trials, and it is also said to reduce the rate of drug failure. In general, the implication is that people can have more effective drugs that take less time and money to trial. Over the years, drug companies have lost a lot of money from abandoned research projects, which they drop because some trials have failed or produced negative effects. With personalized medicine, however, a drug can be paired with genetic tests while still at the clinical research stage, helping to identify, by means of genetic codes, which drugs are likely to work and which are not. This could be useful to drug companies with back catalogues, since shelved products could be taken back into clinical trials with a more selective group of patients. Such a revival could lead to the successful use of older products in an identified and specified type of patient, saving drug companies a lot of money (Hedgecoe, 2004).
    Another impact of personalized medicine is that it is likely to increase the rate at which patients comply with treatment. More patients might be willing to take their drugs for certain conditions if those drugs are effective, and it will also give patients assurance, especially if they do not suffer any negative side effects after taking them. This means we would have a healthier nation with fit people who are likely to contribute not only to the economy of the state but also to other aspects of their lives. Drug companies are also likely to make more money, since patients will buy drugs more readily. This in itself has further implications, as such drug companies are likely to expand due to the increase in their sales volumes and hence profits, which implies more job opportunities, thus reducing the rate of unemployment and the crimes related to idleness (Hedgecoe, 2004).
    Personalized medicine affects insurers in that diagnosis and individualized medication are expensive. The majority of private insurance companies in America do not reimburse for genetic tests, which means the current healthcare system cannot efficiently provide personalized medicine. At the same time, owing to the high expense involved in the tests, many people cannot afford to have them. This implies that not everyone will benefit from personalized medicine; it will only benefit the few who can afford it, while the rest continue to use conventional methods of treatment (Hedgecoe, 2004).
    The last implication is that personalized medicine is likely to cause mistrust at the workplace. A person's information, which will include his or her genome, is likely to become known to more people, especially in companies that do not respect confidentiality, and this would come loaded with other problems (Rothwell, 2007).
    Personalized medicine is likely to face some challenges as more companies practice it. The production and trials of personalized medicines will certainly be very expensive, and they require advanced technology which may not be affordable to many. This would mean that medical researchers would have to use fewer people in the trial cohort, which might compromise the reliability and validity of the results. In the end, there might not be any difference between personalized medicine and the conventional methods of treatment (Hedgecoe, 2004).
    The companies which are ready to produce personalized medicine may also face regulatory challenges. The regulatory bodies would have to change their policies in order to accommodate them; failure to do this would mean that such companies have no commercial or economic incentive to include pharmacogenomics within their programmes for clinical trials, which would be the end of personalized medicine (Hedgecoe, 2004).
Conclusion
Personalized medicine is an issue that has brought controversy due to its cost and the other factors involved. Although people might want to use it, it is very expensive, and without the support of the insurance companies few can afford it; this will remain the case until the pharmacological companies lower the cost of testing and prices are reduced. Lawmakers are yet to incorporate it into law, which means the companies involved stand a higher chance of loss if it is not supported by the state in which they operate. While it has all these challenges, there is reason to support it, because it could mean an end to terminal illnesses. This would spare not only families the losses caused by high death rates from chronic disease, but also the economy, whose labor force would not be depleted by such diseases. However, research still needs to be done on the negative health implications of personalized medicine, and people should be made aware of its pros and cons so that they can make informed choices regarding it.

Trypanosomiasis

Trypanosomiasis is sometimes referred to as sleeping sickness. It is a condition believed to have existed in Africa for several centuries, well before the 14th century. The causative agent was identified between 1902 and 1903 by Sir David Bruce, the differentiation of the protozoan subspecies was done in 1920, and the first effective drug for treating the disease was developed in 1910 by Kiyoshi Shiga and Paul Ehrlich (Brian, 2004). This drug was later discarded for its side effects, which were believed to lead to night blindness. Since then there have been several trials aimed at producing curative drugs for the disease, and numerous vaccines and drugs have been developed to this effect. The disease is caused by a trypanosome parasite transmitted by the tsetse fly. The fly attacks human subjects through a bite and spreads the infectious agents to various parts of the body such as the brain and the meninges, the coverings of the brain and the spinal cord (James, 1999). The disease develops slowly and the symptoms might not be visible until a particular stage, at which it proves fatal if not treated early.
Africa has faced several epidemics of the disease in recent history; one significant example occurred between 1896 and 1906 in the Congo basin and Uganda, and there were other instances in the 1920s across several other African countries. Towards the end of the 1920s the infection rate slowed down somewhat, since several mobile teams were going round the countries screening subjects deemed to be at risk. The disease had drastically declined by 1960 to 1965 (John, 2000), a state achieved through the success of the mobile screening teams, which had tracked down all possible infections and responded to them appropriately. However, the teams relaxed once they observed that the disease had declined, and three decades later it re-emerged in an endemic form in several African countries.
The fly and the parasite causing the disease are most prevalent in Africa, in a very broad belt across and around the Equator. Various reports from different health organizations indicate that since 2005, major outbreaks of the disease have been observed in several African countries within the equatorial region, such as the Democratic Republic of the Congo, Sudan and Angola (Hoppe, 2001). In central Africa the disease has also been observed in various countries such as Malawi, Chad, Uganda, Tanzania and Côte d'Ivoire, and it remains a major public health challenge for these populations.
Human African Trypanosomiasis exists in two forms, depending on the parasite spreading the infectious agents. The first is Trypanosoma brucei rhodesiense (T.b.r.), which is more prevalent across southern and eastern Africa (John, 2000). It causes an acute form of sleeping sickness and accounts for about 10% of the reported cases. The signs and symptoms of this form of the disease are easily observed and begin to manifest themselves a few days or weeks after infection. The disease spreads rapidly and tends to invade and attack the central nervous system faster than the other form.
The second form of the disease is caused by Trypanosoma brucei gambiense (T.b.g.). This form is more common in west and central Africa. It represents 90% of all reported cases and causes a chronic infection of sleeping sickness. The symptoms are hard to detect, and a person can be infected for several months or even years without noticing any major signs and symptoms (Jeff, 1999). Symptoms emerge later, when the infection has already attacked the most critical parts such as the central nervous system, at a very advanced stage. Besides the two strands of the disease, there is a third one, known as American Trypanosomiasis or Chagas disease, which manifests itself in 15 Central and South American countries. The disease-causing organism for this form is different from that of the two African forms, which is why it is referred to as Chagas disease or American Trypanosomiasis, depending on its geographical location.
The disease is transmitted through the bite of the tsetse fly, which is the vector. The bite is unlike any other fly bite, since it is distinctively painful; however, only a few of the flies carry the infectious parasites. Once a person is bitten by an infected fly, the parasite is transmitted into the body, entering the bloodstream from which it is carried to the lymphatic system and the central nervous system (James, 1999). Once in the body, the flagellates of the disease-causing agent reproduce themselves in the bloodstream, and any fly which bites an infected human being becomes infected itself; after a period of four to six months the fly becomes capable of infecting many other persons.
The tsetse fly bite is distinctively painful and can result in the development of a red sore known as a chancre. However, the different types of trypanosomiasis exhibit different signs and symptoms depending on the kind of infection. In the case of East African trypanosomiasis, various symptoms occur within three to four days after infection. These signs and symptoms include severe headache, fever, irritability, extreme tiredness, swollen lymph glands, and aching joints and muscles (Hoppe, 2001); body rash and weight loss are equally common. At an advanced stage, infection of the central nervous system may result in confusion, slurred speech, difficulty in talking and walking, personality changes and the like. If not treated early, the disease can be fatal at its advanced stages.
In the case of West African trypanosomiasis, infected persons may develop a simple chancre within two to three weeks of the tsetse fly bite. Other symptoms may begin to appear weeks or several months later, such as rash, fever, severe headache, joint and muscle pain, loss of concentration, slurred speech and even confusion (James, 1999). If not detected early, the condition can result in death months or years after infection.
The disease is rampant in Africa partly because certain conditions strongly favor its growth and spread. First and foremost, the disease targets populations residing alongside dense vegetation near rivers, lakesides, thick forests and vast wooded savannah plantations. It is also rampant among rural populations engaged in agriculture, fishing, animal husbandry and hunting (Brian, 2004). These populations are more prone to infection than others, since the disease-causing microorganism is also hosted by animals and can be transmitted from livestock to human beings. Sleeping sickness likewise becomes rampant in areas where healthcare standards are poor and infection rates rise for lack of means to contain the spread of the disease (Hoppe, 2001). Such areas include populations displaced by war, poverty and natural calamities and living in poor settings such as refugee camps.
Sleeping sickness has negative effects on human populations, including damage to national and regional economies. Deaths from sleeping sickness change the general demographics and age structure of a country and consequently reduce the supply of skilled labor to a nation, to mention but a few of its impacts.